WiSleep: Scalable Sleep Monitoring and Analytics Using Passive WiFi Sensing
Priyanka Mary Mammen, Camellia Zakaria, Tergel Molom-Ochir, Amee Trivedi, Prashant Shenoy
University of Massachusetts Amherst, USA

Rajesh Balan
Singapore Management University, Singapore
ABSTRACT
Sleep deprivation is a public health concern that significantly impacts one's well-being and performance. Sleep is an intimate experience, and state-of-the-art sleep monitoring solutions are highly personalized to individual users. Motivated by the goal of expanding sleep monitoring to a large scale and contributing sleep data to public health understanding, we present WiSleep, a sleep monitoring and analytics platform using smartphone network connections that are passively sensed from WiFi infrastructure. We propose an unsupervised ensemble model of Bayesian change point detection to predict sleep and wake-up times. We then validate our approach using ground truth from a user study in campus dormitories and a private home. Our results show WiSleep outperforming established methods for users with irregular sleep patterns while yielding comparable accuracy for regular sleepers, with an average accuracy of 79.5%. This is comparable to client-side methods, albeit utilizing only coarse-grained information. Finally, we show that WiSleep can process data from 20,000 users on a single commodity server, allowing it to scale to large campus populations with low server requirements.
Sleep is a vital activity that significantly impacts human well-being, productivity, and performance [36]. Prior research has shown that 30% of the adult population does not get enough sleep, with many adults sleeping less than 7 hours per day [13, 24]. Both work-related stress and the increasing use of mobile devices throughout the day, particularly in the evenings, have increased sleep disorders [39]. The repercussions of sleep deprivation, leading to serious health consequences such as heart disease, stroke, and depression [2, 32], have become a public health burden. The American Academy of Pediatrics confirms sleep deprivation as a public health epidemic, especially among students [15, 32].

Sleep is an intimate experience; hence, many sleep monitoring technologies are highly personalized for individual use. Monitoring data sources specific to sleep are challenging to acquire for public health understanding and benefits [32]. Such information could help professional health administrators keep abreast of a community's needs and well-being. In particular, college students who reside in on-campus dormitories make an insightful study population of irregular sleepers due to overwhelming academic demands. Further, many college campuses, such as those in the United States, are known for their social events during the semester. The active party culture exacerbates bad sleeping habits among students [40]. These irregular habits can significantly and negatively impact students' concentration and academic performance [27].

Numerous solutions have emerged for sleep monitoring. Polysomnography is a gold standard in medical research [37], practical for short-term monitoring. The consumer market has witnessed growing alternatives for sleep trackers using accelerometers and heart rate sensors [4, 12], and research prototypes of in-ear devices [29] are on the way.
However, there remains a general reluctance to use wearables while sleeping [33], making it challenging to support long-term monitoring. Contactless methods utilizing Doppler radar and RF signals have been proposed [19, 33], but they require specific instrumentation in building infrastructures. Much prior work uses smartphone sensors, such as microphones, cameras, phone activity, and screen usage [10, 16, 17, 28, 46]. Collectively, these client-side approaches require direct sensing of users' devices. Unfortunately, such methods cannot be easily scaled to large groups of users and would eventually face pushback over privacy concerns.

Our work focuses on the challenge of developing sleep monitoring analytics at scale for public health benefits while still serving personal wellness goals. In this regard, scalability is important to support sleep monitoring for a large community of users, such as on-campus student residents in a university. With the growing efforts in monitoring students' mental health and well-being using sensing technologies [42, 45], our work's broader goal is to incorporate sleep monitoring analytics into these efforts and offer a holistic community well-being service.

Approach                 Sleep Ability       Contactless   Supervised   Deployment
Doppler [ ], RF [ , ]    duration, quality   yes           yes          building
In-ear [ ]               duration/quality    no            yes          wearable
Phone activity [ , , ]   duration            yes           yes          smartphone
Screen activity [ ]      duration            yes           no           smartphone
WiSleep                  duration            yes           no           WiFi

Table 1: A comparison of prior approaches.
In this paper, we present WiSleep, a scalable sleep monitoring analytics platform utilizing coarse-grained WiFi events that are passively sensed from the WiFi infrastructure. Specifically, when a user's smartphone is connected to the WiFi network, it generates access point (AP) association and disassociation events, which we retrieve from the APs, bypassing direct mobile sensing of users' devices. We therefore use these events as a proxy for user activity and predict sleep duration by discovering periods of declining network activity from the user's phone. We show that passive observations of AP associations are adequate to infer a user's sleep schedule.

Our key design goal is scalability, to promote and support adoption at population scale, for example, among large groups of students on college campuses. First, choosing a network-side approach helps scale our technology rapidly to every device (and, by proxy, every user of these devices) as soon as it is connected to the WiFi network, without requiring active client participation. Second, our approach is ready for immediate deployment without requiring additional hardware installation in the WiFi infrastructure. Third, although we focus on college campuses as our target community, we also demonstrate that our approach is general and works for users in private homes. This paper makes the following contributions:

(1) We present a model based on Bayesian change point detection to predict sleep periods from coarse-grained network events of smartphones. Validation of our model using ground truth data from 16 users residing either in on-campus dormitories or private housing yielded an average accuracy of 79.5%, 74.8% precision, 87.1% recall, and a 0.81 F-score. We find our model robust to noisy data and capable of detecting irregular sleep patterns, which are common among our targeted population.
This unsupervised method requires no prior training data, enabling a scalable approach to large user populations.

(2) We investigate practical challenges by addressing confounding factors from WiFi AP ping-pong effects and background network activity on smartphones. We clarify how device inactivity during the day does not affect our prediction results. Further, we demonstrate WiSleep (pronounced "why-sleep," an apt name for a system for monitoring sleep among students) scaling from 15 to ten thousand users using one server, processing one user in approximately 4 seconds.

(3) We conduct two case studies to show the value of our analytics platform for population and personal use.
• Our first study analyzes 1,000 on-campus student residents over a week, informing different student groups' sleep behaviors. These findings can supplement public health's understanding of sleep-related problems.
• A second longitudinal analysis of students over one semester can help individuals understand their sleeping habits by hour of day and day of week. Our findings support prior evidence that irregular sleepers typically sleep for a longer duration on weekends to recuperate [7].
In this section, we present the important aspects of sleep monitoring solutions and prior efforts. Many sleep monitoring solutions are built on IoT and wearable devices to address individual users' sensing requirements. These devices are increasingly accepted for everyday use, but they are not as ubiquitous as other mobile devices such as smartphones. This is an important consideration, as scalability is a key capability of our system. Our focus is to facilitate population-scale sensing while maintaining personal use. In this work, we consider students residing in on-campus dormitories as our large population sample. In what follows, we describe established approaches and how our technique aims to address their shortfalls. These works are summarized in Table 1.
Wearables.
Sleep monitoring over long periods has become feasible due to the availability of wearables. Consumer trackers such as Fitbit [12] leverage accelerometer or heart rate data. Researchers have explored novel methods such as in-ear wearable sensors to precisely monitor sleep quality and duration [29]. By design, wearables are appropriate for individual monitoring. They can support population-scale monitoring, but all users must wear such devices and transmit the sensed data to the cloud for large-scale analysis. Fitness trackers are still not ubiquitous despite their increased popularity and impose a deployment cost for large user populations. More importantly, many users remain reluctant to wear an on-body device while sleeping [33].
Contactless Techniques.
In contrast, contactless sleep monitoring overcomes adoption pushback by installing sensors in the environment (e.g., wall sensors). These efforts include Doppler radar or RF signals to sense sleep patterns [19, 25, 33]. While such techniques show significant promise, they incur a higher cost for population-scale sensing due to the need to deploy instrumentation in buildings (e.g., all dorm rooms on a college campus).
Mobile Sensing.
The ubiquity of smartphones motivates many researchers to use phone-based sensors as sleep trackers. These works include microphones, cameras, and phone activity logs [10, 16, 17, 28, 35, 46]. Others have shown that monitoring screen activity can be an effective method to infer sleep and wake [1, 10] due to the strong correlation between (lack of) phone activity and a user's sleep. Such a method does not incur any hardware deployment costs due to the ubiquity of smartphones. However, client-side smartphone-based methods face different challenges in scaling up to a large number of users. First, they require dedicated apps, which can be a hurdle at population scale. Second, longitudinal monitoring can be an issue when users change or upgrade phones. This practice is typical among tech-savvy student users, and device changes impose re-installation overheads that are hard to automate.

Detection Mechanism.

All of the above methods can be classified as supervised or unsupervised. Supervised approaches require training data to build detection models. Since collecting large amounts of training data is challenging, a supervised approach is generally harder to scale. In contrast, unsupervised approaches, such as Bayesian methods, do not need any training data and are easier to deploy at a population scale. For example, Khadiri et al. and Cuttone et al. employed unsupervised Bayesian inference to infer sleep periods using different types of sensors [10, 22]. Similarly, an unsupervised approach is best in our case to build a detection model without the need for training data.
The shortcomings of prior work inform our decision to leverage a network-based sensing approach. In what follows, we justify our considerations in adopting WiFi-based sensing.
Passive WiFi Sensing.
Our work uses passive WiFi sensing for sleep monitoring, which has previously been employed for respiration monitoring [23], social interaction monitoring, and campus health and activity monitoring [18, 21, 45]. We hypothesize that network activity from a student's phone is strongly correlated with the user's activity and awake state; thus, simply observing these network activities through coarse-grained AP association events is sufficient to infer sleep periods. Similar to techniques relying on the phone's screen activity, we expect long periods of low network activity to be correlated with sleep periods. Notably, passive sensing sidesteps the need to mandate app installation, which is challenging to enforce, as has been noted from the poor uptake of COVID-19 contact tracing apps even in places like Singapore where they were strongly recommended [6].
Network-centric Approach.
Our approach is entirely network-based, employing device association and disassociation messages generated by the APs in the network. This has several advantages over client-side methods: 1) users do not need to download a dedicated mobile app, and our solution is impervious to any device change; 2) many enterprise WiFi and home WiFi routers provide logging capabilities of coarse-grained network events for the network's performance and security monitoring; in such cases, our approach can utilize these logs without the need to collect any additional data. Overall, our approach is based on coarse-grained AP-level events rather than fine-grained events (e.g., network packet rates), which would impose a higher monitoring overhead on the network. We also utilize existing WiFi networks and ubiquitous smartphones, avoiding additional deployment costs. To the best of our knowledge, our approach is the first to infer sleep periods using a network-centric method and AP-level network events.
Figure 1: Smartphone network events over 24 hours,with low event rate corresponding to sleep.
Figure 1 illustrates a time-series example of a user's smartphone network events in 15-minute intervals throughout a 24-hour period. Consider typical smartphone usage where the device is connected to WiFi for online communication. In doing so, the device connects to a nearby AP. The device periodically re-associates to stay connected to the best AP for as long as the user needs the connection, thus triggering a sequence of association and disassociation events. The device eventually falls into a power-saving state when the user stops interacting with it. Periodically, the device 'wakes up' (e.g., every 15 to 30 minutes) and performs a network scan that triggers a re-association. The fluctuations in these events help us predict the user's activity and state. The main challenge is in determining which period of low network activity should be inferred as the user (actually) sleeping.
Figure 2: Potential sensing errors between ceas-ing/resuming phone activity and sleep/wake onset.
The key assumptions of our work are frequent smartphone and WiFi usage. As humans grow increasingly reliant on their smartphones [41], much common online access, such as video streaming, mobile gaming, and virtual communication, demands the low latency and high bandwidth that WiFi can offer [38]. Because of this, WiFi is preferred as a home network solution, running more efficiently in the long run than relying on cellular networks [43].

Clearly, assuming that a reduction in WiFi network activity corresponds to sleep can be highly erroneous. We list several situations where noise can be introduced: (1) a user may be awake but not immediately use their phone; (2) a user may wake up briefly in between sleep and use their phone, causing periods of actual sleep to be missed; (3) a user may have long inactive periods because they are not utilizing their phone, even though they are awake. Further, our approach is susceptible to confounding factors: (4) when the smartphone switches between nearby APs for the most optimal WiFi connection, it produces what is known as the 'ping-pong effect'; (5) software updates that automatically run on the smartphone may be incorrectly inferred as user activity.

Overall, our predictions of sleep and wake-up times are expected to be less accurate. As illustrated in Figure 2, ceasing phone activity before bedtime does not immediately translate to sleep onset, as users may take some time to fall asleep. For these reasons, it is difficult to tackle our work as a simple binary classification problem where the longest sequence of low-activity periods over a day is determined to be the sleeping period.
In the rest of the paper, we describe how these challenges are addressed and can support compelling utility at population scale. We demonstrate our primary use case, focusing on sleep monitoring for the student population and its utility for student health and well-being services. A supplementary study on a private home shows how our approach remains relevant for personal use.
Our detection mechanism is an ensemble method based on Bayesian change point models. In what follows, we state the problem our model must solve.
Problem Statement.
As stated in Section 3.1, we assume the user's primary device to be a smartphone. Consider an enterprise WiFi network deployed on a university campus with M access points and N users. We model each user as being in one of two states: awake or sleeping. When the user is 'awake', they can either be mobile (moving from one location to another) or localized in a given area, and are assumed to use their phone frequently from time to time. In either case, the phone generates AP association and disassociation events logged by the network, as discussed in Section 3.1. The phone may also generate authentication events (e.g., when adopting enterprise RADIUS authentication) to verify users using their credentials. The network also logs these authentication events.

Given a 24-hour trace of time-stamped WiFi events, we use this trace to compute the rate of network events; we divide the 24-hour period into time slots and count the number of events in each slot. Let w_t denote the WiFi event rate seen at time t and let b denote the slot size (we choose a default slot size of b = 15 min, yielding 96 slots per day). Given a time series of event rates w_t, our problem is to estimate the sleep onset time, T_sleep, and the wake-up time, T_awake, for the user.

We estimate the sleep and wake-up times from WiFi events based on Bayesian change point detection, which is well established for detecting significant changes in time-series data and has been widely used for anomaly detection. As illustrated in Figure 1, we must accurately detect the significant drop in the phone's network activity that occurs at sleep time and the corresponding rise that occurs at wake time. Hence, T_sleep and T_awake are significant change points that we must detect in our time series w_t, based on Bayesian inference of change points. We model w_t as a Poisson process (i.e., the number of events in a time slot is Poisson distributed), where λ is the mean of the distribution.
P(w) = Poisson(w, λ) = λ^w e^(−λ) / w!

Since the mean event rate λ drops at the sleep onset time T_sleep and rises at the wake-up time T_awake, we let λ_sleep and λ_awake denote the mean event rate when a user is asleep and awake, respectively:

λ = λ_sleep,  if T_sleep ≤ t < T_awake
    λ_awake,  otherwise                         (1)

Since the mean event rate λ_awake is high when the user is awake and the event rate λ_sleep is low when asleep (see Figure 1), we assume that λ follows a gamma distribution with the following density function:

Gamma(λ; a, b) = (b^a / Γ(a)) λ^(a−1) exp(−bλ)

Given these assumptions, we need to detect the two change points T_sleep and T_awake at which the event rate in the time series transitions from λ_awake to λ_sleep and vice versa. Bayesian change point detection involves finding the posterior distribution of the change points for different values of t and maximizing it to derive the Maximum A Posteriori (MAP) estimates. This is done by using a Metropolis-Hastings algorithm [8] to estimate these parameters for each value of t and choosing the t that corresponds to the MAP as the change point. As in any Bayesian approach, we need to assign priors to the model parameters (i.e., λ_sleep, λ_awake, T_sleep, T_awake) and then use Metropolis sampling to derive the posterior conditional distribution of each parameter from its joint distribution. As noted earlier, the value of t where the distribution is maximized (the MAP) represents the change point T_sleep (and T_awake).

The need for our Bayesian approach to be robust to noisy WiFi data and irregular sleep patterns (see Section 3.1) makes it challenging to build a model with strong priors; consequently, models with weak (or non-informative) priors reduce model accuracy.
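To make the inference concrete, the following is a minimal, self-contained sketch (not the authors' implementation) of a Metropolis-Hastings random walk over (T_sleep, T_awake, λ_sleep, λ_awake) for a Poisson-distributed event-rate series, tracking the MAP change points; the proposal widths, initial values, and iteration count are illustrative assumptions.

```python
import math
import random

def log_post(w, t1, t2, lam_s, lam_a):
    # Poisson log-likelihood: rate lam_s inside [t1, t2), lam_a outside
    lp = 0.0
    for t, x in enumerate(w):
        lam = lam_s if t1 <= t < t2 else lam_a
        lp += x * math.log(lam) - lam - math.lgamma(x + 1)
    return lp

def mh_changepoints(w, iters=20000, seed=1):
    """Random-walk Metropolis-Hastings; returns MAP (T_sleep, T_awake)."""
    rng = random.Random(seed)
    n = len(w)
    t1, t2, lam_s, lam_a = n // 3, 2 * n // 3, 0.5, 2.5
    best = (log_post(w, t1, t2, lam_s, lam_a), t1, t2)
    for _ in range(iters):
        # jointly propose small perturbations of all four parameters
        p1 = max(1, min(n - 2, t1 + rng.choice([-1, 0, 1])))
        p2 = max(p1 + 1, min(n - 1, t2 + rng.choice([-1, 0, 1])))
        ps = max(1e-3, lam_s + rng.gauss(0, 0.1))
        pa = max(1e-3, lam_a + rng.gauss(0, 0.1))
        cur = log_post(w, t1, t2, lam_s, lam_a)
        prop = log_post(w, p1, p2, ps, pa)
        if math.log(rng.random()) < prop - cur:  # accept/reject step
            t1, t2, lam_s, lam_a = p1, p2, ps, pa
        if prop > best[0]:                       # track the MAP estimate
            best = (prop, p1, p2)
    return best[1], best[2]
```

On a synthetic 96-slot day with a clear lull (e.g., around 3 events per slot while awake and none while asleep), the sketch recovers the two change points to within a slot or two.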
Accordingly, we employ an ensemble method comprising three separate models, each with priors suitable for different scenarios, and finally apply Bayesian Model Averaging (BMA) [14] to derive a combined estimate. The composition of our ensemble model is:
Model 1) Bayesian Model with Location-based Non-informative Prior. This model assumes that sleep periods occur in one or a small subset of locations, such as a dorm room. The location information is inferred directly from the AP placements without localizing the device itself. Priors for a particular day are chosen based on the time spent at these locations. This model is useful for users who have irregular sleep hours but consistent sleep locations. Such location-based priors avoid choosing time periods spent outside the dorm areas as possible sleep periods.

To specify the prior for a specific day, we assume the mapping of all campus APs to their building locations is known a priori and only consider the subset of APs located in the residential dorms. For every user's 24-hour WiFi trace, we determine the longest duration spent in a dorm building (based on network activity observed by the dorm APs). Note, however, that this assumption ignores sleeping behaviors outside the dorm area.

Let [T_start, T_end] denote the time interval spent in dorm areas and k hours the minimum sleep duration. T_sleep and T_awake are uniformly distributed within [T_start, T_end]. Hence, the model priors are given as:

T_sleep ∼ DiscreteUniform(T_start, T_end − k)
T_awake ∼ DiscreteUniform(T_start + k, T_end)

The event rate while awake is assumed to average 2.5 events/bin, yielding a gamma prior with that mean; the event rate while sleeping is assumed to follow a gamma prior with a low non-zero mean:

λ_awake ∼ Gamma(a_awake, b_awake)
λ_sleep ∼ Gamma(a_sleep, b_sleep)
Model 2) Bayesian Model with Normal Prior. This model assumes that the sleep onset and wake-up times are normally distributed (rather than uniformly distributed as in the previous model), and is thus suited for users with regular sleep and wake-up times. Let T_start and T_end denote the start and end times of the user's daily sleep period; T_start and T_end are normally distributed with a standard deviation σ. For example, assume that a student goes to sleep at 12:00 am and wakes up at 8:00 am the next day, with a standard deviation of 3 hours (T_start = 12 am, T_end = 8 am, σ = 3). The priors for λ_sleep and λ_awake are the same for all models. Hence, the model priors are given as:

T_sleep ∼ Normal(T_start, σ)
T_awake ∼ Normal(T_end, σ)

Model 3) Bayesian Model with Hierarchical Prior. This model is useful when sleep behavior changes based on the day's events, resulting in a varying standard deviation. Let T_start and T_end denote the start and end times of a sleeping period, normally distributed as in Model 2 (T_start = 12 am, T_end = 8 am). As sleep behavior varies based on the day's events, T_sleep and T_awake can be derived by adding hyper-priors α_t, β_t, and τ_t to the normal priors. We set the hyper-priors to non-informative distributions since we have no strong knowledge about them. The priors for λ_sleep and λ_awake are the same for all models. Hence, the model priors are given as:

α_t ∼ Exponential(·)
β_t ∼ Exponential(·)
τ_t ∼ Gamma(α_t, β_t)
T_sleep ∼ Normal(T_start, τ_t)
T_awake ∼ Normal(T_end, τ_t)

Once all models are utilized for change point detection, their results are averaged using Bayesian Model Averaging. All models are weighted using a marginal likelihood, where the weights are sensitive to the prior distribution. We generate the weights from the posterior distributions of these models using the Watanabe-Akaike Information Criterion (WAIC) [44]. WAIC relies on the complete posterior distribution rather than on a single point estimate, making it a more robust approach for generating a combined estimate from the ensemble predictions.
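To illustrate the averaging step (a sketch under simplifying assumptions, not the authors' code), the snippet below computes WAIC from per-observation log-likelihoods of posterior samples and converts the ensemble's WAIC scores into normalized model weights using the common "pseudo-BMA" exponential weighting; the data layout and variable names are our own.

```python
import math

def waic(loglik):
    # loglik[s][i]: log-likelihood of observation i under posterior sample s
    S, N = len(loglik), len(loglik[0])
    lppd, p_waic = 0.0, 0.0
    for i in range(N):
        col = [loglik[s][i] for s in range(S)]
        m = max(col)                     # log-mean-exp, computed stably
        lppd += m + math.log(sum(math.exp(v - m) for v in col) / S)
        mean = sum(col) / S              # effective-parameter penalty term
        p_waic += sum((v - mean) ** 2 for v in col) / (S - 1)
    return -2.0 * (lppd - p_waic)        # lower WAIC = better expected fit

def model_weights(waics):
    # pseudo-BMA: weight_m proportional to exp(-0.5 * (WAIC_m - min WAIC))
    best = min(waics)
    raw = [math.exp(-0.5 * (w - best)) for w in waics]
    total = sum(raw)
    return [r / total for r in raw]
```

The exponential weighting rewards the model with the lowest WAIC while still letting the other ensemble members contribute, matching the intent of averaging over models with different priors.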
We now describe how our model is integrated to deliver
WiSleep, our sleep monitoring and analytics platform.
WiSleep, illustrated in Figure 3, comprises four key components. The first is the WiFi Data Collection Engine, which gathers all association and disassociation events from a WiFi network. Second, the Pre-processing Engine anonymizes all device MAC addresses in the raw WiFi event logs, creates event traces specific to a user's primary device, and classifies on-campus student residents based on several heuristics. The Change Point Detection Engine is the third and main component; it converts the 24-hour WiFi trace for each user into a time series of event rates and performs change point detection using our ensemble model (see Section 4 for details). Fourth and finally, our Analytics Platform provides different levels of descriptive information to identify trends in sleep patterns.
Figure 3: System overview of
WiSleep.

We have built a prototype of our system and deployed it on a university campus in the Northeastern United States (blinded for double-blind review). We discuss the implementation of WiSleep in the context of our campus deployment next.
WiSleep assumes the utilization of a WiFi network. In an enterprise setting such as a campus, a network of APs performs the logging, whereas in a residential setting, typically a single AP or a WiFi mesh logs the events. Our campus deployment consists of 5,500 HP/Aruba wireless APs, managed by 7 wireless controllers. We leverage each AP's built-in logging capabilities, which generate distinct types of syslog messages [5]. The syslog data is ultimately sent to a central syslog server that aggregates data from multiple IT systems and network components. WiSleep utilizes event types specific to device association, disassociation, re-association, and successful authentication [5]. In a residential setting, logging is done by independent APs, with each home separately uploading its logs to a prep server.

Scalability:
WiSleep has no specific data collection scalability challenges to overcome, for two reasons. First, enterprise networks are already designed to log events at population scale. For example, our campus WiFi network generates 2 GB of syslog data comprising up to 11.5 million total events from approximately 58,000 devices and 5,500 APs on a typical weekday. Second, WiSleep can use real-time location system (RTLS) reports the same way as syslog data. Specifically, reports of all devices by the RTLS are treated as association events; if a device reportedly disappears from an AP, it is treated as a disassociation event. WiSleep can thus use either RTLS or syslog data available in existing WiFi networks, such as Cisco and Aruba [20].
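As a sketch of that conversion (our own illustration; the snapshot format is a hypothetical assumption), consecutive RTLS snapshots can be diffed into association and disassociation events:

```python
def rtls_to_events(reports):
    # reports: {snapshot_time: set of (device, ap) sightings} -- hypothetical format
    events, prev = [], set()
    for ts in sorted(reports):
        cur = reports[ts]
        for dev, ap in sorted(cur - prev):   # newly seen -> association
            events.append((ts, dev, ap, "association"))
        for dev, ap in sorted(prev - cur):   # vanished -> disassociation
            events.append((ts, dev, ap, "disassociation"))
        prev = cur
    return events
```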
Our pre-processing engine takes the syslog data (with anonymized MAC addresses) as input. Note that anonymization is performed on our campus IT department's server before data is copied to our system. The engine proceeds by partitioning event logs to construct per-device event logs of each user's primary device; the primary device is the one that makes the largest number of daily AP associations (e.g., over a week). We maintain an up-to-date list of user devices to avoid pulling WiFi events from secondary and/or obsolete devices (e.g., a user may change their smartphone to a new model). Finally, we apply a heuristic that identifies devices with a high activity presence in dorm areas as belonging to on-campus student residents. The pre-processing engine is written in Python in roughly 900 lines of code.
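The primary-device heuristic can be sketched as follows (a simplified illustration; the record layout and field names are our assumptions, not the system's actual schema):

```python
from collections import Counter, defaultdict

def primary_devices(assoc_events):
    # assoc_events: iterable of (anon_user_id, device_mac, timestamp) records
    counts = defaultdict(Counter)
    for user, mac, _ts in assoc_events:
        counts[user][mac] += 1
    # primary device = the MAC with the most AP associations for each user
    return {user: c.most_common(1)[0][0] for user, c in counts.items()}
```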
Processed per-device event logs are input to our detection engine. It computes WiFi event rates in 15-minute time slots, spanning from 18:00 hours to 17:59 hours the next day. Accordingly, our model predicts the sleep and wake-up times of users and delivers population-scale and individual-level analytics. We describe our model's performance results in Section 6 and demonstrate our predictive analytics through several case studies in Section 8.
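The event-rate computation can be sketched as a simple binning step (our own illustration; timestamps are assumed to be epoch seconds and day_start the epoch second of the 18:00 window start):

```python
def event_rates(timestamps, day_start, slot_min=15):
    # bin event timestamps into fixed-width slots over a 24-hour window
    n_slots = (24 * 60) // slot_min          # 96 slots for 15-minute bins
    rates = [0] * n_slots
    for ts in timestamps:
        slot = int((ts - day_start) // (slot_min * 60))
        if 0 <= slot < n_slots:              # ignore events outside the window
            rates[slot] += 1
    return rates
```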
System Performance Metrics:

Two performance measures for our sleep monitoring analytics platform are accuracy and timeliness. As reasoned in Section 4, our engine runs on an ensemble of models based on Bayesian change point detection to yield acceptable accuracy despite working with weak priors. In Section 6, we present results comparing the efficacy of WiSleep against three baseline techniques (i.e., rule-based, normal, and hierarchical priors) and tabulate the prediction accuracies in Table 3.

To achieve timeliness in delivering a population-scale analytics solution, our model utilizes the Metropolis-Hastings algorithm [8], which estimates the parameters T_sleep and T_awake for one user in approximately 4 seconds. We demonstrate in Section 7.4 how WiSleep is computationally efficient in producing predictive analytics of 10,000 on-campus student residents under 12 hours. While a single server is adequate to handle the processing needs on our campus, WiSleep uses a cluster to scale to larger user populations by parallelizing the analysis of user device traces across servers (each server is a Dell PowerEdge R430 with a 16-core 2.10 GHz Intel Xeon processor, 64 GB RAM, 10 GigE network connections, and a local 1 TB disk). In a practical use case for our campus health administrators, WiSleep can generate reports of sleep deprivation quickly enough to render pertinent insights.
Our system extends descriptive analytics of users' predicted sleep data to provide insight into sleep patterns. Section 8 demonstrates several ways our data can be represented and how our findings support prior research on sleep studies, particularly on college students. Further, in Section 9, we discuss how our analytics feature can be operationalized for several end users for public health and personal use while upholding ethical considerations.
We experimentally evaluate our model by first assessing model performance through a study among users living in campus dorms and private housing. Next, we compare model performance with other rule-based and Bayesian techniques.
Dataset       Residents                                Ground Truth / Duration
User Study    Fall'19: on-campus students (3M, 1F)     Fitbit data, diary logs;
              Sprg'20: on-campus students (7M, 4F)     8 days (avg.)
              Fall'20: home resident (1M)
Case Study    Anonymous students (1000)                —; 7 days
Population    Anonymous students (7000)                —; 1 day

Table 2: Datasets used for our experiments.
WiSleep has been deployed on our campus and is gathering event logs of all connected devices. Our university has over 31,000 students and close to 14,000 on-campus student residents. With approximately 58,000 detected devices, we anticipate 14,000 of these devices to be applicable for our sleep monitoring analyses.

Table 2 summarizes our datasets. The first is an IRB-approved user study conducted among 15 on-campus undergraduate residents. We advertised in classes to recruit students over two semesters (Fall 2019 and Spring 2020). We precisely identify these users' hashed MAC addresses by monitoring their WiFi events from a dedicated AP on campus. Each student was given a Fitbit device for tracking their sleep, providing ground truth data. The second is off-campus private housing for one homeowner, whose event logs were collected from a home WiFi router (note: event log collection is possible in any programmable router) with the MAC address specified. We also asked participants from both groups to maintain written logs of their sleep and wake times for added verification.

Another dataset consists of per-device event logs of 7,000 students for a given day. Note that this dataset is only used for our scalability analysis in Section 7.4. The final dataset consists of 1,000 anonymous students' traces collected over a week to provide an illustrative case study, in Section 8, on the types of sleep analytics WiSleep can deliver.
Ethical Considerations: This paper's data collection and analysis were conducted under safeguards and restrictions approved by our Institutional Review Board (IRB) and a Data Usage Agreement (DUA) with the campus network IT group. All device MAC addresses and authentication information are anonymized using a strong hashing algorithm. User identities were blinded by assigning numeric identifiers. Ground truth was collected within the IRB-approved protocol. It is important to note that our population-scale analysis was performed on aggregate data of anonymous users. Individual analyses were performed on users who had consented to this study.
Our first experiment aims to validate our approach and utilizes ground truth data from the user study dataset. We compare our predicted values, 𝑇𝑠𝑙𝑒𝑒𝑝 and 𝑇𝑎𝑤𝑎𝑘𝑒, with the ground-truth Fitbit data and compute four metrics: accuracy, precision, recall, and F-score. Accuracy is the proportion of correct predictions (both sleeping and awake periods) relative to all predictions. Precision is the ratio of correctly predicted sleep/awake periods to the total number of predicted sleep/awake periods. Recall is the ratio of correctly predicted sleep/awake periods to all actual sleep/awake periods. F-score indicates the balance between precision and recall (a score of 1 indicating a perfect predictor).

Table 3: WiSleep's performance compared against three baselines.
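These metrics can be computed directly from per-slot sleep/awake labels. The sketch below is illustrative (the slot discretization, labels, and function name are our own assumptions, not WiSleep's code):

```python
# Illustrative computation of the four reported metrics, assuming the
# sleep/awake state is discretized into fixed time slots (1 = asleep).
import numpy as np

def sleep_metrics(predicted, truth):
    """Accuracy, precision, recall, and F-score over per-slot labels."""
    predicted, truth = np.asarray(predicted), np.asarray(truth)
    tp = np.sum((predicted == 1) & (truth == 1))  # correctly predicted sleep slots
    fp = np.sum((predicted == 1) & (truth == 0))  # predicted sleep, actually awake
    fn = np.sum((predicted == 0) & (truth == 1))  # missed sleep slots
    accuracy = np.mean(predicted == truth)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    fscore = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, fscore

# Hypothetical example: 8 slots; one sleep slot missed, one false sleep slot.
truth     = [0, 0, 1, 1, 1, 1, 0, 0]
predicted = [0, 1, 1, 1, 1, 0, 0, 0]
acc, prec, rec, f1 = sleep_metrics(predicted, truth)
print(round(acc, 3), round(prec, 3), round(rec, 3), round(f1, 3))  # → 0.75 0.75 0.75 0.75
```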
Table 3 summarizes our results, with our technique obtaining an average accuracy of 79.5% (+/- 6.3%; max: 90.2%, min: 69.6%), 74.8% precision, 87.1% recall, and a 0.81 F-score in predicting sleep for participants in the user study. We further compare this performance to a baseline rule-based heuristic and state-of-the-art Bayesian approaches in the next section.

Figure 4: Sleep behavior of two students that represent the best case (a) and worst case (b) for WiSleep's approach. Shaded area denotes WiFi events detected in the respective residential area.

Figure 5: Time difference between predicted sleep (left) and wake time (right) using WiSleep and ground truth for participants P13 and P14.
To better understand scenarios where WiSleep performs well or yields higher errors (as its accuracy varies from 69.6% to 90.2%), we show data from two specific users in Figure 4 that represent the best and worst case for WiSleep. Figure 4(a) depicts the daily event trace for user P14 (with a high 85.8% accuracy). The user exhibits near-ideal behavior where sleep onset occurs soon after ceasing phone activity, while the resumption of phone activity occurs soon after waking up. In contrast, Figure 4(b) depicts participant P13, who exhibits behavior that is the worst case from WiSleep's perspective, yielding errors of approximately 3 hours. In particular, the participant's phone exhibits clear WiFi activity for several hours after the initial sleep onset. The participant appears to exhibit restless sleep, waking up a few times in between and using their phone; the resulting network activity is erroneously interpreted as an awake period by WiSleep. Similarly, the participant woke up around 6 am as per the ground truth data but did not use their phone until 9 am, which again confused WiSleep. This example highlights a key limitation of WiSleep: it assumes that network activity from the phone will reflect the user's physical behavior; that is, if the user is asleep, the phone will reduce its overall activity, and vice versa. If this assumption does not hold (as in the case of user P13), the accuracy of WiSleep will drop. The overall accuracy for participant P13 over the entire study was 76.8%, and 85.8% for participant P14, as shown in Figure 5, with user P13 consistently showing worse performance. It is important to note that WiSleep accounts for transient network activity in its approach; such activity is effectively noise for our system, caused by phone apps and services that activate independently of user behavior. For example, push notifications arriving while the user is asleep and background downloads are among the many common sources of noise. We address these issues in Sections 7.1 and 7.2.
Private Home Use.
To demonstrate the applicability of our system in a private setting, we tested WiSleep with one home user and one deployed WiFi AP. In a typical home network setup, a user has the option to set their mean bed and wake-up times when initializing WiSleep. For our user, these times were set to 11:00 pm and 7:00 am, respectively. Note that in this study we track only a single home user; we discuss in Section 9 how multiple home users can be monitored by WiSleep.

Upon running our model for a period of one week, WiSleep yields approximately 85% accuracy. Figure 7 charts the number of WiFi events detected from the user's primary device, along with his predicted sleep duration and ground truth, for two different days, D1 and D2. As observed on D1, WiSleep predicts the sleep time to be 10:45 pm despite the phone being inactive since 7:30 pm. The predicted wake-up time is 6:15 am, 45 minutes earlier than the initialized wake-up time, as phone network activity was recorded once again.

D2 illustrates a key aspect of our implementation: a completely inactive period will not cause WiSleep to falsely predict sleep. In fact, on D2, the user was confirmed not to be at home between 8:00 am and 6:00 pm. For this reason, the WiFi network captured no network activity, including periodic pings, which would otherwise have been recorded had the user (and his primary device) been physically present.

Next, we compare WiSleep to three other techniques: a rule-based heuristic and two state-of-the-art Bayesian methods. Our rule-based heuristic first determines a user's residential dorm. It classifies the time (slot) spent in their dorm as active or inactive by checking whether the observed WiFi rate exceeds a threshold. The first Bayesian method uses normal priors based on a technique by El-Khadiri et al. [11]. The second uses hierarchical priors based on a technique by Cuttone et al. [9]. Table 3 compares the efficacy of the four methods. Overall, WiSleep achieves the highest average accuracy of 79.5%, which is marginally better than the Normal (77.6% accuracy) and Hierarchical (78.9% accuracy) methods; all three Bayesian methods outperform the rule-based heuristic (68.8% accuracy).

Figure 6: Box plots comparing the predicted time differences in sleep, wake-up, and sleep duration with ground truth for different methods: WiSleep, Bayesian methods with Normal and Hierarchical priors, and the rule-based heuristic.

Figure 7: WiSleep's performance for a user in a home network.
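To make the change point idea concrete, the toy sketch below computes a grid posterior over a single change point in per-slot event counts, with Poisson rates before and after the change. This is a simplified stand-in for the paper's ensemble of Bayesian change point models, and the counts and rates are hypothetical:

```python
# Minimal Bayesian change point sketch (illustrative only): WiFi event
# counts per time slot are modeled as Poisson with a high rate lam1 before
# the change point (awake) and a low rate lam2 after it (asleep). The
# posterior over the change point uses a uniform prior on a grid.
import numpy as np

def changepoint_posterior(counts, lam1=3.0, lam2=0.2):
    counts = np.asarray(counts, dtype=float)
    n = len(counts)
    log_post = np.full(n, -np.inf)
    for tau in range(1, n):  # change point between slot tau-1 and tau
        before, after = counts[:tau], counts[tau:]
        # Poisson log-likelihood; the log-factorial term is constant in tau
        ll = (np.sum(before * np.log(lam1) - lam1)
              + np.sum(after * np.log(lam2) - lam2))
        log_post[tau] = ll
    log_post -= log_post.max()          # stabilize before exponentiating
    post = np.exp(log_post)
    return post / post.sum()

# Hypothetical evening trace: active phone use, then near-silence from slot 6.
counts = [4, 3, 5, 2, 4, 3, 0, 0, 1, 0, 0, 0]
post = changepoint_posterior(counts)
print(int(np.argmax(post)))  # → 6 (most likely sleep-onset slot)
```

Note the stray event at slot 8 (count 1) does not shift the maximum a posteriori change point; isolated transient activity is averaged into the low post-change rate rather than treated as a wake-up.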
Figure 8: Varying sleep patterns of participant P15.
The box plots in Figure 6 show the time differences between predicted and ground-truth sleep and wake-up times, and compare the predicted sleep duration with ground truth values. Our results indicate that WiSleep and the Hierarchical method tend to perform better in predicting sleep time (average time difference of WiSleep: 102 minutes, Normal: 146 minutes, Hierarchical: 94 minutes), while WiSleep and the Normal method perform better in predicting wake-up times (average time difference of WiSleep: 32 minutes, Normal: 36.5 minutes, Hierarchical: 62.5 minutes). The rule-based heuristic is less robust to noisy data and yields the highest errors. This result shows an ensemble model's ability to match the performance of the better Bayesian model in each scenario. To illustrate the benefits of an ensemble approach, consider user P15, who demonstrates changing sleep patterns (see Figure 8) with significant day-to-day variations in the ground truth data. WiSleep yields a higher accuracy of 84.8% over the other two Bayesian methods (Normal: 74.9%, Hierarchical: 78.5%) because our ensemble method is designed to handle such exceptions.
In Section 5.4, we presented accuracy and timeliness as two performance measures for our sleep monitoring analytics platform. Here, we evaluate the impact of noisy data and larger numbers of users on the efficacy of our approach.
Figure 9: Ping-pong events during a sleeping period for user P11. Green shaded area denotes the ground truth for the user's sleeping period. Cyan horizontal line denotes the primary AP that the user is usually connected to, while black horizontal lines denote other APs in close proximity.
When a stationary device sees multiple APs with similar RSSI values, it may switch back and forth between them, causing a "ping-pong" effect. The noise from this effect can resemble network activity despite the absence of user interaction. To avoid such instances, we group APs in an area, such as a dorm floor, and filter out patterns that resemble ping-pongs between nearby APs. To demonstrate the heuristic, consider user P11 from our user study, whose phone exhibits significant ping-pong noise. Figure 9 illustrates the user's WiFi activity across different APs. We observe multiple ping-pong events between 3:00 am and 4:00 am, but the connection remains consistent with the primary AP throughout the rest of the sleep period. Without our heuristic, WiSleep would have predicted sleep only after the connection stabilizes at 4:00 am, a 45-minute delay from the actual sleep time (3:00 am is closer to the true value).
Background activities such as push notifications and software updates can introduce noise by generating network activity while a user is asleep. Since the frequency of such activities is typically low, our ensemble method is resilient to the noise they introduce. Moreover, our validation study used real Android and iOS devices on which such activities are already present, demonstrating the efficacy of our approach despite such confounding factors. To further evaluate the impact of such noise, we introduced background activities in a controlled fashion. Here, we used an Android phone, alternating between long idle periods, followed by some push notifications, and finally a mobile app download from the Play Store. We created a synthetic device trace by inserting this noisy trace into an actual device trace during a nightly sleep period.

Figure 10: Ability of our approach to handle noise from a phone's background network activities. Shaded area denotes WiFi events detected in a residential area.

The synthetic trace, shown in Figure 10, was then subjected to our change point detection method. As shown, the sleep and wake-up times before and after the noise injection are quite similar (≈15 minutes difference in wake-up time). This demonstrates WiSleep's robustness to a modest amount of noise from background activities.
Figure 11: Impact of inactive periods on WiSleep's model performance in non-residential (red shade) and residential (blue shade) areas.
There can be multiple device inactivity periods for a user in a day, leading to false positives. WiSleep accommodates false positives by picking only the relevant inactive periods using priors for sleep and wake-up times, and then considering a user's physical presence in their residential area. For instance, in Figure 11, we observe that a user was inactive during two periods: first, 8:15 pm to 6:00 am, and second, 8:15 am to 5:30 pm. Inactivity between 8:15 pm and 6:00 am is classified as the eventual sleeping duration, primarily because the user is in residence. However, this example also illustrates a different case: the user is in residence between 8:15 am and 5:30 pm (highlighted in light blue), yet this period is not classified as sleep because it falls outside the sleep-time priors. A similar situation can be seen in Figure 7(b), where WiSleep was able to identify network absence, thus avoiding false positives.
In a real-world implementation of a sleep monitoring solution for on-campus student residents, WiSleep needs to scale to tens of thousands of users present on campus. Next, we evaluate the scalability of the WiSleep system in supporting a large number of users under accuracy and timeliness constraints. To validate our argument, we examine two factors: 1) the number of samples needed for computation, and 2) the CPU cost of the sampling process. First, we determine the number of samples needed per user to create accurate estimates in the sampling process employed by WiSleep. Generally, the more samples used, the higher the accuracy. However, more samples also incur higher CPU cost, affecting the timeliness of the results.
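This samples-versus-CPU-cost tradeoff can be reproduced in miniature with a toy Metropolis sampler over a change point location. The model, rates, and counts below are only illustrative; WiSleep's actual inference code may differ:

```python
# Toy experiment in the spirit of the sample-size tradeoff: time posterior
# sampling as the number of samples grows. A random-walk Metropolis chain
# samples a single change point tau in per-slot event counts, with assumed
# Poisson rates lam1 (awake) and lam2 (asleep).
import math
import random
import time

def metropolis_changepoint(counts, n_samples, seed=0):
    rng = random.Random(seed)
    n = len(counts)
    lam1, lam2 = 3.0, 0.2  # assumed high/low event rates

    def loglik(tau):
        # Poisson log-likelihood up to a constant in tau
        return (sum(c * math.log(lam1) - lam1 for c in counts[:tau])
                + sum(c * math.log(lam2) - lam2 for c in counts[tau:]))

    tau = n // 2
    samples = []
    for _ in range(n_samples):
        prop = min(max(tau + rng.choice([-1, 1]), 1), n - 1)  # random-walk proposal
        delta = loglik(prop) - loglik(tau)
        if delta >= 0 or rng.random() < math.exp(delta):      # Metropolis acceptance
            tau = prop
        samples.append(tau)
    return samples

counts = [4, 3, 5, 2, 4, 3, 0, 0, 1, 0, 0, 0]
for n_samples in (10, 50, 500):
    start = time.perf_counter()
    samples = metropolis_changepoint(counts, n_samples)
    print(n_samples, round(time.perf_counter() - start, 4))
```

Even this toy model shows the pattern the paper reports: the posterior mode stabilizes with few samples, while wall-clock cost keeps growing linearly with the sample count.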
Figure 12: Accuracy and CPU overhead of change point detection for various sample sizes.
Figure 12 shows the accuracy and CPU cost of the computation for two different users, obtained by varying the number of samples from 10 to 2,000 over a period of one week. We observe that using between 10 and 50 samples yields an accuracy of approximately 85%, which does not significantly change as the sample size increases. Naturally, the more samples used, the higher the CPU cost. The results show that a good accuracy-computation tradeoff for WiSleep is to use 50 samples, producing an accuracy of 85% with a CPU processing cost of approximately 4 seconds per user.

Figure 13: WiSleep scales to >20k users on a single server. (a) Distribution of time taken per user; (b) CPU overheads for 1,000 to 20,000 users.
Next, we examine how WiSleep scales when processing a large number of users. Figure 13 shows that CPU time scales linearly with the number of users, and a prediction cycle completes in 23 hours for 20,000 users, showing that a single server is sufficient to handle all on-campus students at our university. Hence, WiSleep can generate reports of sleep deprivation for a large number of users quickly enough to render pertinent insights on the same day. One key point to mention is that our system currently uses unoptimized Python libraries for Bayesian inference and does not use any hardware accelerators such as GPUs. Additionally, the computation is highly parallelizable and can be scaled near-linearly using a cluster of servers.

8 WISLEEP ANALYTICS
We present insights from two case studies to demonstrate how our population-scale aggregate analytics can benefit public health and personal use.
Using our case study dataset of 1,000 anonymous student users, we conduct an aggregate-level analysis of their sleep behavior for one week. Figure 14 plots the average sleep duration of all users by weekday. Our results support existing findings of college students reporting longer sleep duration over the weekends [7]. Specifically, we recognize a declining trend of sleep at the beginning of the week, followed by gradual increments later in the week, and a sharp and stable increment over the weekend. The decrease in sleep duration on weekdays was likely due to various academic demands, typically fulfilled while juggling class hours.
Figure 14: How do aggregate sleep patterns vary by day of the week? Mean sleep duration predicted for 1,000 anonymous users using WiSleep.

Digging deeper, we want to understand the fraction of students who have irregular sleep patterns and how their sleep varies compared to students with regular patterns. We adapted the consistency metric proposed by Rashid et al. [34] to generate a sleep consistency score between 0 and 1 for each user, where 1 denotes a user with regular sleep patterns throughout the week. We applied a median split to determine the threshold for categorizing users into groups with regular sleep patterns (scores 0.61 to 1) and irregular sleep patterns (0 to 0.6). First, we find that 839 of our 1,000 students have irregular sleep patterns. Next, Figure 15 compares the sleep duration for users with regular and irregular sleep patterns, across weekends versus weekdays.
Figure 15: Box plots comparing the predicted sleep duration difference between users with regular and irregular sleep patterns on weekdays and weekends.
The box plot for users with regular sleep shows that the median sleep duration is approximately 9 hours on both weekends and weekdays. In contrast, users with irregular sleep patterns show 1.5 hours less sleep on weekends. Most regular sleepers get between approximately 7 and 11.5 hours of sleep on weekends, far different from irregular sleepers, who got between 5.5 and 10.6 hours. Overall, both groups show similar sleep patterns on weekdays, and the plots show less variability in sleep during the weekday. These observations provide several interesting insights. For example, most users in both groups maintain at least 7 hours of sleep on weekdays. Outliers (also irregular sleepers) who clocked less than 2.5 hours of sleep on weekdays are the worrisome cases, as the lack of sleep will likely affect their class performance the next day. Second, 25% of users with regular sleep patterns clock at least 7 hours of sleep on weekdays and weekends, and most students in this category sleep for a minimum of 5 hours on weekends. Irregular sleepers who lie within the first quartile are likely not getting enough rest, especially since the weekend is a crucial recuperation period.
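The grouping step can be sketched as follows. The exact consistency metric follows Rashid et al. [34]; the variance-based score below is only an illustrative proxy, and all user data are hypothetical:

```python
# Sketch of the regular/irregular grouping: map each user's nightly sleep
# durations to a 0-to-1 consistency score (1 = perfectly regular), then
# median-split users on that score. The score here is a variance-based
# stand-in for the Rashid et al. metric.
import statistics

def consistency_score(sleep_hours, max_std=4.0):
    """Map the std-dev of nightly sleep durations onto [0, 1]."""
    std = statistics.pstdev(sleep_hours)
    return max(0.0, 1.0 - std / max_std)

def median_split(scores):
    """Return (regular, irregular) user lists using the median score as threshold."""
    threshold = statistics.median(scores.values())
    regular = [u for u, s in scores.items() if s > threshold]
    irregular = [u for u, s in scores.items() if s <= threshold]
    return regular, irregular

# Hypothetical weekly sleep durations (hours) for four users.
users = {
    "u1": [8, 8, 8, 8, 8, 9, 9],    # very regular
    "u2": [7, 8, 7, 8, 7, 8, 8],    # regular
    "u3": [4, 9, 5, 10, 6, 11, 5],  # irregular
    "u4": [3, 10, 4, 11, 6, 12, 5], # very irregular
}
scores = {u: consistency_score(h) for u, h in users.items()}
regular, irregular = median_split(scores)
print(sorted(regular), sorted(irregular))  # → ['u1', 'u2'] ['u3', 'u4']
```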
Next, we illustrate WiSleep's ability to perform sleep analytics for individual on-campus student users over the course of a semester. We randomly selected a subset of our user study participants and retrieved their WiFi events for approximately 70 days, from the start of the semester until semester end. Note that we intentionally left out the first three weeks, as students were more likely to use this time to settle into their student accommodation.

Figure 16: How do sleep patterns change over a semester? Predicted sleep duration for two participants, P6 and P7, over the semester.

Figure 16 illustrates the predicted sleep duration, averaged every three days, for two anonymous users, P6 and P7. On the whole, both users display sleep inconsistencies throughout the semester. However, P7's sleep patterns seem fairly consistent at the start of the semester and showed high variability as they transitioned to mid-term week (20/10/2019 - 03/11/2019). The same observation can be made for P6, whose largest dip also occurred on 28/11/2019. It is important to note that for P6, the two lowest points (of 3 hours of sleep) were attributed to missing data. For example, P6 was not detected in the primary residential location for four days between 29/10/2019 and 3/11/2019, resulting in a low average sleep duration.
Figure 17: Inferred sleep duration for two participants, P6 and P7, between weekends and weekdays, over the semester.
Figure 17 illustrates sleep regularity for the users on weekdays and weekends each week over the semester. P7 generally gets about 8 hours of sleep on average on weekdays (avg = 8 hours 48 minutes, std = 1 hour 33 minutes) and weekends (avg = 8 hours 42 minutes, std = 1 hour 37 minutes). On the other hand, P6 tends to sleep longer on weekends (avg = 10 hours 44 minutes, std = 1 hour 50 minutes) than on weekdays (avg = 9 hours 10 minutes, std = 2 hours 37 minutes). While P6 appears to get more sleep overall, we note that P6's sleep duration decreased much more (by one standard deviation) in Week 8 and Week 14, denoting mid-term and final exam weeks (Week 12 corresponds with Thanksgiving recess). Distinguishing between a user who is present on campus but not getting sleep and a user absent from campus is a key heuristic for a practical application of sleep intervention using our monitoring system. Since P6 was not detected on campus, the sharp dip (3 hours of sleep) in our results should not raise unwanted alarms for intervention. In this example, our results suggest that both participants tend to recover from sleep loss, sleeping at least ten hours on average after nights of much shorter sleep.

9 DISCUSSION

We have addressed the challenges of predicting sleep duration and providing analytics for population and individual use. Here, we discuss the implications of WiSleep and its limitations.
A key extension is detecting polyphasic sleep. Prior research suggests that people who sleep less at night may compensate with naps; such secondary sleep can occur before the main sleep time (𝑇𝑠𝑙𝑒𝑒𝑝−) or after the wake-up time (𝑇𝑎𝑤𝑎𝑘𝑒+). Our model could use a uniform prior (see Section 4.2) to find the longest inactive period at these two times. For instance, in Figure 11, if both inactive periods had occurred in a residential building, the first or second inactive period could be classified as 'secondary sleep.' This warrants further investigation. As discussed in Section 7.3, WiSleep handles false positives in several ways. In our campus implementation, sleep duration is only predicted for time spent in residential areas, whereas in a home implementation, a user's absence is identified by observing ping-pong events from the user's primary device.
Indeed, poor sleep hygiene has major health consequencesand is a public health issue [2, 15, 32].
WiSleep can render actionable insights from its aggregated population-scale analytics. Our system can play a key role in responding to the call for action to advance research on sleep disorders (e.g., by providing "open-access" data sources for public health researchers [30]). Our privacy-preserving approach will ensure that trust and confidence can be upheld in data-sharing practices for public health [31].
First, our approach assumes that device event data is available on a longitudinal basis for daily sleep monitoring. However, data for students may be absent from our logs for numerous reasons, and such unavailability will disrupt WiSleep's daily monitoring. Our approach also requires periodic maintenance to ensure user devices remain valid (i.e., users did not change their phones) and residency information stays current (e.g., students may no longer reside in dorms). Finally, unlike prior work [29, 33], our approach is limited to predicting sleep duration. One key measure is sleep quality, which requires physiological indicators directly sensed from the user's body (i.e., a wearable). There is no workaround to this limitation, but our focus is on analyzing aggregated sleep trends at large scale, for which coarse-grained WiFi information is more than adequate.
10 CONCLUSIONS
In this paper, we presented WiSleep, a network-based system that detects sleep periods by passively observing the network activity of a user's phone and provides aggregated and individual-level analytics for public and personal use. We presented an ensemble-based Bayesian inference technique to infer sleep from coarse-grained WiFi association and disassociation events. We validated our approach with 16 users living in on-campus dormitories or a private home, and showed that it outperforms state-of-the-art methods for users with irregular sleep patterns while yielding comparable accuracy (79.5% on average) for regular sleepers. Further, we showed that WiSleep can process the data of 20k users on a single commodity machine, allowing it to scale to large campuses with low server requirements. Our large-scale case study revealed several interesting insights for population-scale and individual sleep analytics. As future work, we plan to combine our sleep models with stress detection methods to develop a complete student well-being service.
REFERENCES
[1] Saeed Abdullah, Mark Matthews, Elizabeth L. Murnane, Geri Gay, and Tanzeem Choudhury. 2014. Towards Circadian Computing: Early to Bed and Early to Rise Makes Some of Us Unhealthy and Sleep Deprived. In Proc. 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing.
[2] Bruce M Altevogt, Harvey R Colten, et al. 2006. Sleep Disorders and Sleep Deprivation: An Unmet Public Health Problem. National Academies Press.
[3] Sonia Ancoli-Israel and Jennifer L Martin. 2006. Insomnia and daytime napping in older adults. Journal of Clinical Sleep Medicine.
[7] Journal of American College Health 50, 3 (2001), 131–135.
[8] Siddhartha Chib and Edward Greenberg. 1995. Understanding the Metropolis-Hastings algorithm. The American Statistician 49, 4 (1995), 327–335.
[9] Andrea Cuttone, Per Bækgaard, Vedran Sekara, Håkan Jonsson, Jakob Eg Larsen, and Sune Lehmann. 2017. SensibleSleep: A Bayesian model for learning sleep patterns from smartphone events. PLoS ONE 12, 1 (2017), e0169901.
[10] A. Cuttone, P. Bakgaard, V. Sekara, H. Jonsson, J.E. Larsen, and S. Lehmann. 2017. SensibleSleep: A Bayesian Model for Learning Sleep Patterns from Smartphone Events. PLoS ONE 12, 1 (2017).
[11] Yassine El-Khadiri, Gabriel Corona, Cédric Rose, and François Charpillet. 2018. Sleep Activity Recognition using Binary Motion Sensors. In Proc. 30th IEEE Conference on Tools with Artificial Intelligence.
[13] MMWR. Morbidity and Mortality Weekly Report 58, 42 (2009), 1175.
[14] Tiago M. Fragoso, Wesley Bertoli, and Francisco Louzada. 2017. Bayesian Model Averaging: A Systematic Review and Conceptual Classification. International Statistical Review 86, 1 (Dec 2017), 1–28.
[15] Adolescent Sleep Working Group et al. 2014. School start times for adolescents. Pediatrics.
[16] IEEE Transactions on Mobile Computing 15, 6 (2015), 1514–1527.
[17] Tian Hao, Guoliang Xing, and Gang Zhou. 2013. iSleep: unobtrusive sleep quality monitoring using smartphones. In Proceedings of the 11th ACM Conference on Embedded Networked Sensor Systems. 1–14.
[18] Hande Hong, Chengwen Luo, and Mun Choon Chan. 2016. SocialProbe: Understanding social interaction through passive WiFi monitoring. In Proceedings of the 13th International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services. 94–103.
[19] Chen-Yu Hsu, Aayush Ahuja, Shichao Yue, Rumen Hristov, Zachary Kabelac, and Dina Katabi. 2017. Zero-Effort In-Home Sleep and Insomnia Monitoring using Radio Signals. In Proc. ACM Interact. Mob. Wearable Ubiquitous Technology.
[20] Dheryta Jaisinghani, Rajesh Krishna Balan, Vinayak Naik, Archan Misra, and Youngki Lee. 2018. Experiences & Challenges with Server-Side WiFi Indoor Localization Using Existing Infrastructure. In Proceedings of the 15th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services (MobiQuitous '18). Association for Computing Machinery, New York, NY, USA, 226–235. https://doi.org/10.1145/3286978.3286989
[21] Eftychia Kalogianni, R Sileryte, Marco Lam, Kaixuan Zhou, Martijn Van der Ham, S Van der Spek, and E Verbree. 2015. Passive WiFi monitoring of the rhythm of the campus. In Proceedings of The 18th AGILE International Conference on Geographic Information Science. 9–14.
[22] Yassine El Khadiri, Gabriel Corona, Cédric Rose, and François Charpillet. 2018. Sleep Activity Recognition using Binary Motion Sensors. In Proc. 30th IEEE Conference on Tools with Artificial Intelligence.
[23] Usman Mahmood Khan, Zain Kabir, Syed Ali Hassan, and Syed Hassan Ahmed. 2017. A deep learning framework using passive WiFi sensing for respiration monitoring. In GLOBECOM 2017 - 2017 IEEE Global Communications Conference. IEEE, 1–6.
[24] Patrick M Krueger and Elliot M Friedman. 2009. Sleep duration in the United States: a cross-sectional population-based study. American Journal of Epidemiology.
[25] Proceedings of the 16th ACM International Symposium on Mobile Ad Hoc Networking and Computing. ACM, 267–276.
[26] Xianchen Liu, Makoto Uchiyama, Keiko Kim, Masako Okawa, Kayo Shibui, Yoshihisa Kudo, Yuriko Doi, Masumi Minowa, and Ryuji Ogihara. 2000. Sleep loss and daytime sleepiness in the general adult population of Japan. Psychiatry Research 93, 1 (2000), 1–11.
[27] Ganpat Maheshwari and Faizan Shaukat. 2019. Impact of poor sleep quality on the academic performance of medical students. Cureus.
[28] Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 477–486.
[29] Anh Nguyen, Raghda Alqurashi, Zohreh Raghebi, Farnoush Banaei Kashani, Ann C. Halbower, and Tam Vu. 2016. A Lightweight and Inexpensive In-ear Sensing System For Automatic Whole-night Sleep Stage Monitoring. In Proceedings of the 14th ACM Conference on Embedded Network Sensor Systems (SenSys). ACM, 230–244.
[30] Klara K Papp, Carolyn E Penrod, and Kingman P Strohl. 2002. Knowledge and attitudes of primary care physicians toward sleep and sleep disorders. Sleep and Breathing 6, 3 (2002), 103–109.
[31] Michael Parker and Susan Bull. 2015. Sharing public health research data: toward the development of ethical data-sharing practice in low- and middle-income settings. Journal of Empirical Research on Human Research Ethics 10, 3 (2015), 217–224.
[32] Geraldine S Perry, Susheel P Patil, and Letitia R Presley-Cantrell. 2013. Raising awareness of sleep as a healthy behavior. Preventing Chronic Disease 10 (2013).
[33] Tauhidur Rahman, Alexander T Adams, Ruth Vinisha Ravichandran, Mi Zhang, Shwetak N Patel, Julie A Kientz, and Tanzeem Choudhury. 2015. DoppleSleep: A contactless unobtrusive sleep sensing system using short-range Doppler radar. In Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing. 39–50.
[34] Haroon Rashid, Pushpendra Singh, and Krithi Ramamritham. 2017. Revisiting selection of residential consumers for demand response programs. In Proceedings of the 4th ACM International Conference on Systems for Energy-Efficient Built Environments. 1–4.
[35] Yanzhi Ren, Chen Wang, Jie Yang, and Yingying Chen. 2015. Fine-grained sleep monitoring: Hearing your breathing with smartphones. In Computer Communications (INFOCOM), 2015 IEEE Conference on. IEEE, 1194–1202.
[36] Mark R Rosekind, Kevin B Gregory, Melissa M Mallis, Summer L Brandt, Brian Seal, and Debra Lerner. 2010. The cost of poor sleep: workplace productivity loss and associated costs. Journal of Occupational and Environmental Medicine 52, 1 (2010), 91–98.
[37] Warren R Ruehland, Fergal J O'Donoghue, Robert J Pierce, Andrew T Thornton, Parmjit Singh, Janet M Copland, Bronwyn Stevens, and Peter D Rochford. 2011. The 2007 AASM recommendations for EEG electrode placement in polysomnography: impact on sleep and cortical arousal scoring. Sleep 34, 1 (2011), 73–81.
[38] Joel Sommers and Paul Barford. 2012. Cell vs. WiFi: on the performance of metro area mobile connections. In Proceedings of the 2012 Internet Measurement Conference. 301–314.
[39] Sara Thomée, Annika Härenstam, and Mats Hagberg. 2011. Mobile phone use and stress, sleep disturbances, and symptoms of depression among young adults - a prospective cohort study. BMC Public Health 11, 1 (2011), 66.
[40] Karen Vail-Smith, W Michael Felts, and Craig Becker. 2009. Relationship between sleep quality and health risk behaviors in undergraduate college students. College Student Journal 43, 3 (2009), 924–930.
[41] Alexander JAM Van Deursen, Colin L Bolle, Sabrina M Hegner, and Piet AM Kommers. 2015. Modeling habitual and addictive smartphone behavior: The role of smartphone usage types, emotional intelligence, social stress, self-regulation, age, and gender. Computers in Human Behavior 45 (2015), 411–420.
[42] Rui Wang, Weichen Wang, Alex DaSilva, Jeremy F Huckins, William M Kelley, Todd F Heatherton, and Andrew T Campbell. 2018. Tracking depression dynamics in college students using mobile phone and wearable sensing. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 2, 1 (2018), 1–26.
[43] Shweta Ware, Chaoqun Yue, Reynaldo Morillo, Jin Lu, Chao Shang, Jayesh Kamath, Athanasios Bamis, Jinbo Bi, Alexander Russell, and Bing Wang. 2018. Large-scale automatic depression screening using meta-data from wifi infrastructure. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 2, 4 (2018), 1–27.
[44] Sumio Watanabe. 2013. A Widely Applicable Bayesian Information Criterion. Journal of Machine Learning Research 14 (2013).
[45] Camellia Zakaria, Rajesh Balan, and Youngki Lee. 2019. StressMon: Scalable Detection of Perceived Stress and Depression Using Passive Sensing of Changes in Work Routines and Group Interactions. Proceedings of the ACM on Human-Computer Interaction 3, CSCW (2019), 1–29.
[46] Chen Zhenyu, Nicholas Lane, Guiseppe Cardone, Mu Lin, Tanzeem Choudhury, and Andrew Campbell. 2013. Unobtrusive Sleep Monitoring Using Smartphones. In