Context-Dependent Implicit Authentication for Wearable Device Users
William Cheung and Sudip Vhaduri
Fordham University, Bronx, NY 10458, USA
{wcheung5, svhaduri}@fordham.edu

Abstract—As market wearables become popular with the range of services they provide, including making financial transactions, accessing cars, etc., based on various private information of a user, the security of this information is becoming very important. However, users are often flooded with PINs and passwords in this internet of things (IoT) world. Additionally, hard-biometric-based authentications, such as facial or fingerprint recognition, are not adaptable to market wearables due to their limited sensing and computation capabilities. Therefore, there is a pressing need to develop a burden-free implicit authentication mechanism for wearables using the less-informative soft-biometric data that are easily obtainable from market wearables. In this work, we present a context-dependent soft-biometric-based wearable authentication system utilizing the heart rate, gait, and breathing audio signals. From our detailed analysis, we find that a binary support vector machine (SVM) with radial basis function (RBF) kernel can achieve an average accuracy of 0.94 ± 0.07, an F score of 0.93 ± 0.08, and an equal error rate (EER) of about 0.06 at a lower confidence threshold of 0.52, which shows the promise of this work.

Index Terms—wearable authentication, biometrics, implicit authentication
I. INTRODUCTION
A. Motivation
The interconnected nature of the Internet of Things (IoT) has allowed us to remotely collect information from, or control, a multitude of physical objects. Along with the growth of IoT, smartphones and wearables have advanced in their sensing and computational capabilities to a point that enables many new applications and usage scenarios to emerge [1]–[5]. Even with this progress, wearables are still growing in popularity with the arrival of new applications. Some include the ability to identify a user to third-party services [6], protect commercial customer information (i.e., passwords, credit card information) [7], manage financial payments, allow access to smartphones and other paired devices, unlock vehicles [7], monitor or track individuals (e.g., child or elderly monitoring or fall detection), and assess an individual's health and fitness. According to a recent market report, a 72.7% increase in wearable shipments and an associated increase in sales revenue of 78.1% are predicted from 2016 to 2022 [8].

However, wearables also raise new challenges, especially in terms of security. The main accuracy and reliability concern is that imposters with unauthorized access can steal information from other sensitive IoT objects, which poses a significant risk [9]. Furthermore, intentional device sharing between target and non-target users might lead to inaccurate and faulty assessments, since healthcare providers and researchers increasingly rely on wearables to monitor their patients or study participants remotely. Therefore, there is an imperative need for a robust and accurate authentication mechanism specifically for wearable device users.

Existing wearable devices either have no authentication system or rely on knowledge-based mechanisms, such as regular PIN locks or pattern locks [7], [10], which suffer from scalability issues [11]. Additionally, users often opt to disable security mechanisms entirely out of convenience, as cumbersome designs discourage the use of the security features themselves.
B. Related Work

1) Wearable Constraints:
Wearable device user authentication is a relatively new field of research compared to other mobile authentication [6], [11]–[13]. The limited display sizes of wearables add another constraint that limits the choices of authentication mechanisms [6], [14]. But as technology advances, companies such as Samsung, Fitbit, Apple, Garmin, and Embrace can provide data at lower levels of granularity. More biometrics are becoming available as more sensors, such as microphones, electrocardiograms (ECG), and GPS, are added, but accuracy concerns remain. Researchers have found that, although for people over the age of 85 the Apple Watch accurately detects atrial fibrillation at a rate of 96%, for people under 55 it correctly diagnoses atrial fibrillation only 19.6% of the time [15]. Another group of researchers [16] designed a wrist-strapped ECG reader and developed an authentication system with an accuracy of 93.5%, which is limited by its ease of use and the need for user movement. Therefore, an authentication scheme that can utilize data from a multitude of readily available sensors on market wearables could be a more realistic way to develop a non-stop implicit wearable device user authentication system.
2) Multi-modal Biometric Authentication:
In previous work, combinations of biometrics were used to form multi-modal biometric authentication systems for increased reliability compared to unimodal systems, which often suffer from noisy data, intra-class variations, inter-class similarities, and spoof attacks [17]. For multi-modal authentication systems, researchers have utilized different hard- and soft-biometrics. However, due to the relatively low computational power of wearables, these multi-modal approaches are typically not implemented for implicit and continuous authentication on state-of-the-art wearables.
3) Wearable Authentication:
Researchers recently proposed authentication techniques that are more suitable for wearables, focusing on approaches based on behavioral biometrics, such as gait [18]–[20], activity types [6], [9], gestures [21], and keystroke dynamics [22], and on physiological biometrics, such as PPG signals [23]. Almost all of these studies are based on project-specific generated datasets. Other projects have addressed some of the limitations of gait-based approaches by considering different types of gestures [21] or activities [6], [9]. However, all of these models are based on movement and thereby fail to work in the very common human state of being sedentary [14], [22]. Authentication approaches using physiological biometric data, such as heart rate and bioimpedance [13], require fine-grained samples, and the sensor readings are easily affected by noise, motion, etc., but they are constantly available. Depending on a user's context, i.e., physical state and availability of biometrics, it is possible to build a robust multi-modal authentication process that continues to work across changing contexts.
C. Contributions
The main contribution of this paper is the exploration of a hierarchical non-stop implicit authentication for wearables using less-informative coarse-grained soft-biometrics. Compared to previous work [14], [24], where we used hybrid-biometrics, such as calorie burn, that can be affected by a user's self-reported input, e.g., age, height, and weight, in this work we focus on three different soft-biometrics, i.e., heart rate, gait, and breathing, that can be measured without a user's self-reported input and can be easily obtained from market wearables. We present a multi-biometric-based hierarchical context-driven approach (discussed in Section II-E) that works both in sedentary and non-sedentary periods. We develop both binary and unary models based on the availability of other people's data in addition to a valid user's data. We are able to authenticate a user with an average accuracy of 0.82 ± 0.08 and F = 0.81 ± 0.08 (non-sedentary, Table II) and an average accuracy of 0.94 ± 0.07 and F = 0.93 ± 0.08 (sedentary, Table III) while developing the binary SVM models (Section III-D). While developing the unary models, we obtain an average accuracy of 0.72 ± 0.07 and F = 0.73 ± 0.06 (Section III-D).

II. APPROACH
In this paper, we intend to demonstrate the importance and effectiveness of different biometrics for identifying wearable device users with the help of different machine learning models. Before we describe the detailed analysis, we first introduce the datasets, pre-processing steps, feature engineering, and methods used in this work.
A. Datasets
In this work, we use the following three different datasets.
• Fitbit dataset: We use the heart rate data collected at a rate of one sample per minute using the Fitbit Charge HR device from 10 subjects, similar to our previous work [14], [24]–[38].
• Gait dataset: We use the WISDM dataset [39], where gyroscope and accelerometer readings were collected at a rate of one sample per 50 milliseconds using the LG G Watch (running the Wear 1.5 operating system). In this work, we use 10 subjects' data.
• Audio dataset: We collect breathing audio clips from 10 subjects, with six distinct inhalation breathing events per clip, using the Evistr digital voice recorder. The clips are around 5 seconds long.
B. Data Pre-Processing
Since we are using real-world datasets, we first need to clean them before use. Then, we need to segment the continuous streams of biometrics, such as heart rate, gait information, and the desired audio events (i.e., breathing). Finally, we compute and select influential features before constructing authentication models.
1) Data Segmentation:
Since heart rate and gait data were sampled at different frequencies, we segment the heart rate and gait samples into 10-sample windows to obtain stable and rich information. Using a 50% sliding window, we obtain 800 heart rate windows and 720 gait windows, i.e., instances, from each subject. Unlike the heart rate or gait data, the audio data comes with other types of sounds in addition to the desired breathing sounds. Additionally, some clips come with multiple breathing events separated by silence or noisy parts. Therefore, we segment the audio clips to fetch single inhalation breathing events. Thereby, we obtain around six inhalation breathing events per subject. Since each event is modified in 102 ways, as described in the next section (Section II-B2), we obtain a total of 612 instances from each subject. While utilizing the three different biometrics to develop the different models discussed in the Methods section, i.e., Section II-E, we consider the same 612 instances from each biometric.
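The 50% sliding-window segmentation described above can be sketched as follows; the window size and step come from this section, while the stream length and the `segment` helper are only illustrative:

```python
import numpy as np

def segment(samples, window_size=10, overlap=0.5):
    """Split a continuous biometric stream (e.g., per-minute heart rate)
    into fixed-size windows; a 50% overlap means a step of 5 samples."""
    step = max(1, int(window_size * (1 - overlap)))
    return np.array([samples[i:i + window_size]
                     for i in range(0, len(samples) - window_size + 1, step)])

# Illustrative stream length: 4005 samples yield 800 ten-sample windows.
windows = segment(np.arange(4005))
print(windows.shape)  # -> (800, 10)
```

The actual number of windows per subject depends on how many samples that subject contributed.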
2) Audio Data Augmentation:
Breathing audio could be altered due to a change in contexts, e.g., environments, physical state, or mood. To simulate this and capture the variations, we augment the original audio breathing events using various pitch shifts, speed changes, and noise superpositions.
• Pitch shift: We consider 15 different pitch shifts ranging from -3.5 to 3.5 with 0.5 increments.
• Speed change: We consider seven speed changes ranging from 0.25x to 2x times the speed of an original clip with an increment of 0.25x, skipping 1x since that would represent the original clip, which we have already included as the pitch shift with value 0.
• Noise superposition: We consider 10 randomly picked vacuum and washing machine sound clips, obtained from the environmental sound classification dataset [40], as background noises to modify the original breathing event clips with eight different signal-to-noise ratio levels ranging from −40 to 40, incremented by magnitudes of 10, while skipping 0.
Thereby, each original breathing clip is modified 102 times (15 + 7 + 10 × 8 = 102).

C. Feature Computation
We compute the following sets of candidate features.
• Heart rate features: From each window of 10 samples, we compute 21 statistical features described in our previous work [24].
• Gait features: We compute the same 21 features mentioned above from each window of x-, y-, and z-axis readings obtained from both the gyroscope and the accelerometer.
• Audio features: From each inhalation breathing event (original and augmented), we compute 40 Mel-frequency cepstral coefficients (MFCCs).
Thereby, we obtain 21 and 126 (21 from each of the six axes) features from a single window of heart rate and gait data, respectively, and 40 features from every breathing clip.
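A per-window statistical feature extractor of the kind described above might look like the following sketch; the specific statistics below are illustrative assumptions, not the actual 21 features, which are defined in [24]:

```python
import numpy as np

def window_features(w):
    """Illustrative per-window statistics (central tendency, spread,
    quartiles, range, and first-difference statistics). The paper's
    actual 21 features are those defined in [24] and may differ."""
    w = np.asarray(w, dtype=float)
    d = np.diff(w)  # sample-to-sample change within the window
    return np.array([w.mean(), w.std(), w.min(), w.max(), np.median(w),
                     np.percentile(w, 25), np.percentile(w, 75),
                     w.max() - w.min(), d.mean(), d.std()])

# One 10-sample heart-rate window; for gait, the same extractor would be
# applied per axis (6 axes x 21 features = 126 features per window).
feats = window_features([72, 74, 73, 75, 77, 76, 78, 80, 79, 81])
```

The audio MFCCs would typically be computed with an audio library rather than by hand.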
D. Feature Selection
To select the most influential features, we use the scikit-learn feature selection package SelectKBest ("select the K best features"), which provides an importance score for each feature; based on that score, we rank the features. We try different numbers of features, i.e., K, to find the best model performance. In this work, we find that K = 20 performs the best. In each iteration of the leave-one-out validation described in Section III-A, we select slightly different feature sets, which are very similar apart from changes in ordering.

E. Methods
In Figure 1, we present an overview of our proposed implicit and continuous wearable-user authentication scheme using person-dependent multiple biometrics that are readily available on most market wearables. Depending on a user's context, i.e., the user's state, different routes of the authentication scheme will be executed.

We first try to authenticate a user based on the heart rate obtained from the photo-plethysmogram (PPG) sensors, since this biometric data is always available irrespective of a user's state. However, the heart rate data may not be precise enough to identify the user when it is recorded at coarse granularity, e.g., one sample per minute. Additionally, the heart rate biometric can be easily affected by different factors, such as motion artifacts or stress. Therefore, if the system cannot authenticate the user with enough confidence, it checks the next authentication module, which relies on other biometrics.

The authentication system first checks whether the user is moving, utilizing the on-device accelerometer and gyroscope data. If the user is moving, the system tries to authenticate the user based on the gait and heart rate biometrics together. If the system can authenticate the user with enough confidence, it allows the user to access the device.

However, if the user is not moving, or the gait and heart rate-based module cannot authenticate the user, the system tries to incorporate breathing biometrics collected from the on-device microphone. During sedentary states, audio recordings from wearables are less affected by motion artifacts. Thereby, the breathing audio recordings could be a good biometric to identify users during sedentary states. If the system can authenticate the user with enough confidence, it allows the user to access the device.
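The hierarchical routing just described, including the final fallback to explicit verification, can be sketched as follows; the function, its confidence inputs, and the example scores are hypothetical placeholders, while the 0.52 threshold is the one analyzed in Section III-E:

```python
def authenticate(hr_conf, moving, hrg_conf=None, hrb_conf=None, threshold=0.52):
    """Context-driven routing: try the HR model first; if it is not
    confident enough, fall back to the HRG model while moving, or to
    the HRB model while sedentary; otherwise require explicit input."""
    if hr_conf >= threshold:
        return "accept (HR)"
    fallback = hrg_conf if moving else hrb_conf
    if fallback is not None and fallback >= threshold:
        return "accept (HRG)" if moving else "accept (HRB)"
    return "reject: fall back to explicit PIN/password"

print(authenticate(0.70, moving=False))                # HR model suffices
print(authenticate(0.40, moving=True, hrg_conf=0.80))  # gait + heart rate
print(authenticate(0.40, moving=False, hrb_conf=0.30))  # explicit fallback
```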
Otherwise, the user's access to the device is revoked and requires some sort of external verification, such as PIN locks or passwords.

Based on the combination of the three biometrics that we use in our authentication approach, we define the following models:
• Heart rate data-driven model (HR model)
• Heart rate and gait data-driven model (HRG model)
• Heart rate and breathing data-driven model (HRB model)

While developing the above models, we consider various classifiers, including the k-nearest neighbor (k-NN), random forest (RF), naive Bayes (NB), and support vector machine (SVM) with binary and unary schemes. Compared to binary, unary models are available only for the SVM classifiers with radial basis function (RBF) kernels.

Based on the windowing approach discussed in Section II-B1, we can derive the time complexity of our authentication system from the sampling frequencies of the different sensors. For example, let us consider a case where every heart rate sample is collected in x seconds. With this sampling frequency, it will take 10x seconds to make a window of 10 heart rate samples. Therefore, if the HR model can successfully validate a user, it will take 10x seconds to complete the authentication process. However, if the HR model fails to validate a user, the system can take one of two paths based on the user's context. If the user is in a non-sedentary state, then the HRG model will be triggered, which will wait an additional 10x seconds to collect 10 gait and heart rate samples; thereby, a total of 20x seconds will be required for the system to validate the user. If the HR model fails to validate the user and the user's context, i.e., physical state, is sedentary, then the system will try to authenticate the user based on breathing events in addition to heart rate. Since the average length of a breathing event used in this work is 1.4 seconds, if x ≥ 0.14 seconds (i.e., a single heart rate window is longer than the breathing event), the system will need 10x seconds to gather 10 new heart rate samples for the HRB model to test the user. Thereby, it will take in total 20x seconds for the system to authenticate the user. However, if x < 0.14 seconds, i.e., breathing events are longer than the heart rate windows, the system will take 10x + 1.4 seconds to validate the user.

Fig. 1: Proposed wearable device user authentication scheme

TABLE I: The best HR models with average and standard deviation of performance measures

BINARY Model
Classifier (parameters)           | feature count | ACC         | RMSE        | FAR         | FRR         | F score     | AUC-ROC
RF (n_estimators = 450)           | 20            | 0.64 (0.12) | 0.04 (0.01) | 0.30 (0.15) | 0.42 (0.16) | 0.61 (0.14) | 0.64 (0.12)
k-NN (k = 32, Minkowski distance) | 20            | 0.63 (0.11) | 0.04 (0.01) | 0.37 (0.15) | 0.36 (0.14) | 0.63 (0.12) | 0.63 (0.11)
NB                                | 20            | 0.65 (0.11) | 0.04 (0.01) | 0.36 (0.25) | 0.39 (0.19) | 0.61 (0.12) | 0.63 (0.11)
SVM (RBF kernel, γ = 0., C = 3)   | 20            | 0.66 (0.11) | 0.04 (0.01) | 0.29 (0.16) | 0.38 (0.17) | 0.63 (0.14) | 0.66 (0.11)
SVM (Poly. kernel, d = 1, C = 1)  | 20            | 0.65 (0.12) | 0.04 (0.01) | 0.26 (0.20) | 0.44 (0.23) | 0.59 (0.18) | 0.65 (0.12)
UNARY Model
SVM (RBF kernel, γ = 0., ν = 0.)  | 20            | 0.56 (0.08) | 0.05 (0.00) | 0.41 (0.14) | 0.46 (0.09) | 0.55 (0.08) | N/A

TABLE II: The best HRG models with average and standard deviation of performance measures

BINARY Model
Classifier (parameters)           | feature count | ACC         | RMSE        | FAR         | FRR         | F score     | AUC-ROC
RF (n_estimators = 450)           | 20            | 0.69 (0.13) | 0.04 (0.01) | 0.47 (0.32) | 0.15 (0.21) | 0.73 (0.21) | 0.71 (0.13)
k-NN (k = 24, Minkowski distance) | 20            | 0.79 (0.07) | 0.03 (0.01) | 0.19 (0.10) | 0.23 (0.09) | 0.79 (0.08) | 0.79 (0.07)
NB                                | 20            | 0.65 (0.10) | 0.04 (0.01) | 0.28 (0.26) | 0.42 (0.27) | 0.62 (0.20) | 0.66 (0.10)
SVM (RBF kernel, γ = 0., C = 5)   | 20            | 0.82 (0.08) | 0.03 (0.01) | 0.17 (0.09) | 0.19 (0.10) | 0.81 (0.08) | 0.82 (0.08)
SVM (Poly. kernel, d = 3, C = 14) | 20            | 0.78 (0.09) | 0.03 (0.01) | 0.19 (0.12) | 0.25 (0.13) | 0.77 (0.10) | 0.78 (0.09)
UNARY Model
SVM (RBF kernel, γ = 0., ν = 0.)  | 20            | 0.72 (0.10) | 0.04 (0.01) | 0.28 (0.16) | 0.29 (0.08) | 0.72 (0.09) | N/A

TABLE III: The best HRB models with average and standard deviation of performance measures

BINARY Model
Classifier (parameters)           | feature count | ACC         | RMSE        | FAR         | FRR         | F score     | AUC-ROC
RF (n_estimators = 600)           | 20            | 0.90 (0.07) | 0.02 (0.01) | 0.13 (0.10) | 0.07 (0.08) | 0.90 (0.07) | 0.90 (0.07)
k-NN (k = 2, Minkowski distance)  | 20            | 0.92 (0.07) | 0.02 (0.01) | 0.08 (0.07) | 0.09 (0.11) | 0.91 (0.09) | 0.92 (0.07)
NB                                | 20            | 0.75 (0.05) | 0.04 (0.00) | 0.22 (0.10) | 0.29 (0.12) | 0.73 (0.07) | 0.75 (0.05)
SVM (RBF kernel, γ = 0., C = 4)   | 20            | 0.94 (0.07) | 0.02 (0.01) | 0.06 (0.07) | 0.07 (0.09) | 0.93 (0.08) | 0.94 (0.07)
SVM (Poly. kernel, d = 4, C = 16) | 20            | 0.91 (0.07) | 0.02 (0.01) | 0.06 (0.06) | 0.11 (0.08) | 0.91 (0.07) | 0.91 (0.07)
UNARY Model
SVM (RBF kernel, γ = 0., ν = 0.)  | 20            | 0.72 (0.07) | 0.04 (0.00) | 0.32 (0.10) | 0.24 (0.06) | 0.73 (0.06) | N/A

III. USER AUTHENTICATION
Before presenting the detailed evaluation of our models, we first present the training-testing set split and our modeling schemes, followed by the list of performance measures and the hyper-parameter optimization.
A. Training-Testing Set
In our binary modeling, we try to distinguish a valid user (class-1) from the imposters (class-0). To avoid overfitting, we consider at least 10 times more feature windows, i.e., instances, than the number of features. For training-testing, we follow the leave-one-out strategy, where we train-test N unique models one-by-one for each user with N instances. During each training-testing, we keep one instance for testing and use the remaining N − 1 instances for training. Since we have 10 subjects and perform six leave-one-out tests for each subject, all aggregated performance measures presented in this paper are based on 60 performance measures.

For class balancing, in the case of binary models, we consider the same N − 1 number of instances from each class. Since our imposter class (class-0) consists of nine persons' data (i.e., all subjects except the one considered as the valid subject, or class-1), we pick (N − 1)/9 instances from each imposter. For example, while training an HR model, we consider 510 heart rate windows from a target/valid user and 510/9 ≈ 57 windows from each imposter. Similarly, while training an HRB model, we use 510 windows, i.e., breathing events, from a valid user in addition to 510 heart rate windows, where the 510 breathing events are obtained from five original breathing events and their augmentations, i.e., 5 × 102 = 510. To keep the training and test sets separate, we use the remaining breathing event and its 102 augmentations, i.e., 102 events/windows, for testing. For imposters, we uniformly select the windows to ensure a balanced classification. In the case of unary models, we also follow the leave-one-out strategy. But, compared to the binary models, unary models are developed with only a valid user's data, using an outlier rate (ν) to split the user's data into valid and outlier groups. In our experiments, we find ν = 0. to be the optimal outlier rate.

B. Performance Measures
To evaluate the performance of the different modeling approaches, we consider the following measures:
Accuracy (ACC), which is the fraction of predictions that are correct, i.e.,

ACC = (TP + TN) / (TP + FN + FP + TN)    (1)
Root Mean Square Error (RMSE), which is the square root of the sum of squares of the deviations of the predictions from the actual values. It is equivalent to the square root of the rate of misclassification, i.e.,

RMSE = sqrt((FP + FN) / (TP + FN + FP + TN))    (2)

Fig. 2: Box plots of (a) positive and (b) negative measures of performance of the HRB model with the binary SVM RBF classifier. Cross markers (×) represent the average values.
False Acceptance Rate (FAR), which is the fraction of invalid users accepted by an authentication system, i.e.,

FAR = FP / (FP + TN)    (3)
False Rejection Rate (FRR), which is the fraction of genuine users rejected by an authentication system, i.e.,

FRR = FN / (TP + FN)    (4)

F Score, which is the measure of performance of an authentication system based on both its precision (positive predictive value) and recall (true positive rate), i.e.,

F Score = 2 ((TP + FN)/TP + (TP + FP)/TP)^(-1)    (5)

Area Under the Curve - Receiver Operating Characteristic (AUC-ROC), which summarizes the graphical relationship between the true positive rate and the false positive rate as the decision threshold changes. The terminologies used in Equations 1, 2, 3, 4, and 5 have their usual meanings in machine learning when classifying a subject using a feature set. Therefore, a desirable authentication system should have lower negative measures (i.e., RMSE, FAR, and FRR) and higher positive measures (i.e., ACC, F Score, and AUC-ROC) of performance. We also use the Equal Error Rate (EER), which is defined as the point where FRR and FAR are equal, i.e., a trade-off between the two error measures (i.e., FRR and FAR).
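Equations (1)–(5) can be computed directly from the four confusion counts; the counts below are illustrative only:

```python
def measures(tp, fn, fp, tn):
    """Performance measures from Equations (1)-(5)."""
    total = tp + fn + fp + tn
    acc = (tp + tn) / total                  # Eq. (1)
    rmse = ((fp + fn) / total) ** 0.5        # Eq. (2): sqrt of error rate
    far = fp / (fp + tn)                     # Eq. (3): imposters accepted
    frr = fn / (tp + fn)                     # Eq. (4): valid users rejected
    f = 2 * tp / (2 * tp + fp + fn)          # Eq. (5), simplified harmonic mean
    return acc, rmse, far, frr, f

acc, rmse, far, frr, f = measures(tp=90, fn=10, fp=6, tn=94)
```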
C. Hyper-Parameter Optimization
We use the grid search package (GridSearchCV) in scikit-learn to find the optimal hyper-parameter sets. For each leave-one-out modeling, we separately perform the hyper-parameter optimization using various ranges of values. Across the different iterations of the leave-one-out approach, we obtain similar values for the hyper-parameters. In Tables I, II, and III, we present the sets of optimal values obtained from the different modeling approaches.
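A minimal sketch of this step with scikit-learn's GridSearchCV on synthetic data; the parameter ranges and data shown are illustrative assumptions, not the exact grids used in the paper:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Synthetic stand-in for one user's 20 selected features.
rng = np.random.default_rng(3)
X = rng.standard_normal((200, 20))
y = rng.integers(0, 2, 200)   # valid user (1) vs. imposter (0)
X[y == 1] += 1.0              # make the classes separable

# Illustrative hyper-parameter grid for the binary RBF SVM.
grid = {"C": [1, 2, 3, 4, 5], "gamma": [0.01, 0.1, 1.0]}
search = GridSearchCV(SVC(kernel="rbf"), grid, cv=5).fit(X, y)
best = search.best_params_    # e.g., the C and gamma reported in Tables I-III
```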
D. Authentication Model Evaluation
In Tables I, II, and III, we present the performance of the best models using various biometric combinations and different classifiers. In Table I, we observe that the best binary HR model (i.e., the model that only uses heart rate data) can provide an average ACC and AUC-ROC of 0.66 ± 0.11. As discussed previously in Section II-E, if the HR model is not confident enough to authenticate a user or fails to authenticate, we use additional biometrics, such as gait or breathing sound. Compared to binary, for the unary HR model we observe low performance, i.e., an average ACC of 0.56 ± 0.08, since the unary model considers portions of a valid user's data as outliers.

In Table II, we observe that adding the gait biometric (when available) to heart rate improves all measures. In the case of the best binary HRG model (i.e., the model that uses heart rate and gait biometrics), ACC and AUC-ROC increased by 24% and the F score increased by 29% compared to the best binary HR model. The FAR also improves (i.e., drops) from 0.29 ± 0.16 to 0.17 ± 0.09. Though gait data is only available while a user is moving, its addition to the less accurate minute-level heart rate data can significantly improve authentication performance. Similarly to the binary case, the unary HRG model shows promise over the unary HR model with an overall increase of about 29% both for ACC and F score.

In Table III, we observe that the HRB model (i.e., the model that uses heart rate and breathing biometrics) achieves better performance than the HRG model. We achieve a 65% drop in the FAR while comparing the binary HRB with the binary HRG model. Additionally, we observe ≈15% improvements in ACC, F score, and AUC-ROC of the binary HRB model over the binary HRG model. While comparing the HRB model to the HR model, we observe a huge performance improvement. Compared to the binary HR model, the binary HRB model performs better in terms of F score (an increase of 48%) and AUC-ROC (an increase of 42%), with a high accuracy of 0.94 ± 0.07. The unary HRB model performs similarly to the unary HRG model with a lower standard deviation, i.e., higher consistency, in terms of ACC (0.72 vs. 0.72) and F score (0.73 vs. 0.72).

In Figure 2, we present five summarized values of the different performance measures in addition to the average values presented in Table III. In the figure, we observe that the median of each performance measure is better than the average, since the average is easily affected by outliers, which we do not show for simplicity of visualization. For example, we obtain a 2.6% better ACC, 2.8% better F score, 59% better FAR, and 89% better FRR when comparing median with average values. Additionally, we observe that the interquartile ranges of the different performance measures are about 0.07 (Figure 2a) and 0.05 (Figure 2b). Similarly, in the case of unary modeling, we obtain tighter interquartile ranges. These narrow interquartile ranges represent the consistency of the performance measures.

In Figure 3, we present the Probability Distribution Function (PDF) and Cumulative Distribution Function (CDF) with error bars of the performance of the best binary model. In Figure 3, around 65% of the performance values (both ACC and F scores) fall in the range of 0.95–1, which shows that our models perform very well in most cases. We observe a similar concentration of performance values in the case of unary modeling.

Fig. 3: PDF and CDF with error bars of binary HRB SVM (RBF) model performance.

Fig. 4: Change of error rates with varying confidence thresholds using the binary HRB SVM (RBF) model.

E. Error Analysis
In this section, we present an analysis of how our system performs as the confidence level, i.e., threshold, changes. An ideal system should have both a low FAR and a low FRR. In Figure 4, we present our analysis of the error rates (FAR and FRR) with varying confidence thresholds for the binary HRB SVM (RBF) model. We observe that at a confidence threshold of 0.52, FAR and FRR intersect with an equal error rate (EER) of about 0.06. After this point, the error rates drop quickly. We observe that FAR and FRR drop below 0.05 after threshold values of around 0.6 and 0.7, respectively.

IV. LIMITATIONS, DISCUSSION, AND CONCLUSIONS
To the best of our knowledge, this is the first work that attempts to authenticate a wearable device user without any explicit user interaction utilizing three easily obtainable soft-biometrics (i.e., heart rate, gait, and breathing sounds) in a context-based approach, i.e., driven by the availability of data. We can authenticate a user with an average accuracy and AUC-ROC of 0.94 ± 0.07, an F score of 0.93 ± 0.08, and an EER of about 0.06 at a 0.52 confidence threshold while considering the heart rate and breathing sounds. This shows the promise of developing a continuous implicit-authentication system for market wearables utilizing their limited sensing and computational capabilities.

This work has some limitations, which we plan to address in the future. First, we have a limited number of audio breathing clips. However, we increase the data volume using standard audio augmentation approaches. Second, in this feasibility work, we use a set of ten subjects. However, we perform a leave-one-out validation approach, and our achieved performance shows promise for further investigation with a large-scale, extended-period study. Third, we use different datasets, which could affect the performance. However, we use three independent biometrics and perform feature selection analysis to optimize the implementation; thereby, our results potentially show a baseline performance, which could be further improved by collecting the three biometrics from the same subjects, since that could more robustly identify a user compared to our case. Finally, more advanced modeling techniques, such as deep learning models (recurrent neural networks or convolutional neural networks), may further improve the accuracy of the models, but they would require offloading data from the wearable, which can lead to additional security challenges; therefore, our approach has a higher scope to be implemented on the wearables themselves.

REFERENCES

[1] S. Vhaduri and T. Prioleau, "Adherence to personal health devices: A case study in diabetes management," in EAI PervasiveHealth, 2020.
[2] S. Vhaduri, "Nocturnal cough and snore detection using smartphones in presence of multiple background-noises," in ACM COMPASS, 2020.
[3] S. Vhaduri, T. Van Kessel, B. Ko, D. Wood, S. Wang, and T. Brunschwiler, "Nocturnal cough and snore detection in noisy environments using smartphone-microphones," IEEE, 2019, pp. 1–7.
[4] S. Vhaduri and T. Brunschwiler, "Towards automatic cough and snore detection," IEEE, 2019, pp. 1–1.
[5] M. T. Al Amin, S. Barua, S. Vhaduri, and A. Rahman, "Load aware broadcast in mobile ad hoc networks," IEEE, 2009, pp. 1–5.
[6] A. Bianchi and I. Oakley, "Wearable authentication: Trends and opportunities," it-Information Technology, vol. 58, no. 5, pp. 255–262, 2016.
[7] T. Nguyen and N. Memon, "Smartwatches locking methods: A comparative study," in Symposium on Usable Privacy and Security, 2017.
[8] "Forecasted value of the global wearable devices market," Accessed: February 2018. [Online]. Available: https://goo.gl/C682Rv
[9] Y. Zeng, A. Pande, J. Zhu et al., "WearIA: Wearable device implicit authentication based on activity information," in IEEE World of Wireless, Mobile and Multimedia Networks (WoWMoM), 2017.
[10] M. Guerar, L. Verderame, A. Merlo, F. Palmieri, M. Migliardi, and L. Vallerini, "CirclePIN: A novel authentication mechanism for smartwatches to prevent unauthorized access to IoT devices," ACM Transactions on Cyber-Physical Systems, vol. 4, no. 3, pp. 1–19, 2020.
[11] J. Unar, W. C. Seng, and A. Abbasi, "A review of biometric technology along with trends and prospects," Pattern Recognition, vol. 47, no. 8, pp. 2673–2688, 2014.
[12] J. Blasco, T. M. Chen, J. Tapiador et al., "A survey of wearable biometric recognition systems," ACM Computing Surveys, vol. 49, no. 3, p. 43, 2016.
[13] C. Cornelius, J. Sorber, R. A. Peterson et al., "Who wears me? Bioimpedance as a passive biometric," in HealthSec, 2012.
[14] S. Vhaduri and C. Poellabauer, "Wearable device user authentication using physiological and behavioral metrics," IEEE, 2017, pp. 1–6.
[15] "Apple Watch EKG not as accurate for younger people, physician says," Accessed: January 2020. [Online]. Available: shorturl.at/cnwS7
[16] Z. Yan, Q. Song, R. Tan, Y. Li, and A. W. K. Kong, "Towards touch-to-access device authentication using induced body electric potentials," arXiv preprint arXiv:1902.07057, 2019.
[17] M. Ghayoumi, "A review of multimodal biometric systems: Fusion methods and their applications," in IEEE/ACIS Computer and Information Science (ICIS), 2015.
[18] N. Al-Naffakh, N. Clarke, F. Li et al., "Unobtrusive gait recognition using smartwatches," BIOSIG, 2017.
[19] G. Cola, M. Avvenuti, F. Musso et al., "Gait-based authentication using a wrist-worn device," in ACM Mobile and Ubiquitous Systems: Computing, Networking and Services, 2016.
[20] A. H. Johnston and G. M. Weiss, "Smartwatch-based biometric gait recognition," in IEEE Biometrics Theory, Applications and Systems (BTAS), 2015.
[21] S. Davidson, D. Smith, C. Yang et al., "Smartwatch user identification as a means of authentication," Department of Computer Science and Engineering Std., 2016.
[22] A. Acar, H. Aksu, A. S. Uluagac et al., "WACA: Wearable-assisted continuous authentication," arXiv preprint arXiv:1802.10417, 2018.
[23] N. Karimian, M. Tehranipoor, and D. Forte, "Non-fiducial PPG-based authentication for healthcare application," in IEEE Biomedical & Health Informatics (BHI), 2017.
[24] S. Vhaduri and C. Poellabauer, "Multi-modal biometric-based implicit authentication of wearable device users," IEEE Transactions on Information Forensics and Security, vol. 14, no. 12, pp. 3116–3125, 2019.
[25] S. Vhaduri, A. Munch, and C. Poellabauer, "Assessing health trends of college students using smartphones," IEEE, 2016, pp. 70–73.
[26] S. Vhaduri and C. Poellabauer, "Cooperative discovery of personal places from location traces," IEEE, 2016, pp. 1–9.
[27] S. Vhaduri, C. Poellabauer, A. Striegel, O. Lizardo, and D. Hachen, "Discovering places of interest using sensor data from smartphones and wearables," IEEE, 2017, pp. 1–8.
[28] S. Vhaduri and C. Poellabauer, "Towards reliable wearable-user identification," IEEE, 2017, pp. 329–329.
[29] ——, "Hierarchical cooperative discovery of personal places from location traces," IEEE Transactions on Mobile Computing, vol. 17, no. 8, pp. 1865–1878, 2018.
[30] ——, "Biometric-based wearable user authentication during sedentary and non-sedentary periods," arXiv preprint arXiv:1811.07060, 2018.
[31] ——, "Impact of different pre-sleep phone use patterns on sleep quality," IEEE, 2018, pp. 94–97.
[32] ——, "Opportunistic discovery of personal places using smartphone and fitness tracker data," IEEE, 2018, pp. 103–114.
[33] ——, "Design and implementation of a remotely configurable and manageable well-being study," in Smart City 360, Springer, 2016, pp. 179–191.
[34] ——, "Human factors in the design of longitudinal smartphone-based wellness surveys," IEEE, 2016, pp. 156–167.
[35] ——, "Opportunistic discovery of personal places using multi-source sensor data," IEEE Transactions on Big Data, 2018.
[36] ——, "Design factors of longitudinal smartphone-based health surveys," Journal of Healthcare Informatics Research, vol. 1, no. 1, pp. 1–40, 2017.
[37] C.-Y. Chen, S. Vhaduri, and C. Poellabauer, "Estimating sleep duration from temporal factors, daily activities, and smartphone use," in IEEE COMPSAC, 2020.
[38] W. Cheung and S. Vhaduri, "Context-driven implicit authentication for wearable device users," in