FaceOff: Detecting Face Touching with a Wrist-Worn Accelerometer
Xiang ‘Anthony’ Chen [email protected] HCI Research
[Figure 1 panels: X, Y, and Z acceleration (g) over time for FACE TOUCHING (eye, nose, mouth, hair, forehead, temple, ear, cheek, chin) and for GENERAL ACTIVITIES (flipping magazines, jumping jacks, typing on a keyboard, speaking with gestures, washing hands; chosen based on [9, 12]).]
Figure 1: We report evidence that demonstrates the potentials and limitations of using a commodity wrist-worn accelerometer to detect face-touching behavior based on the specific motion pattern of raising one's hand towards the face, detecting 82 out of 89 face touches with a false positive rate of 0.59% in a preliminary study.
ABSTRACT
According to the CDC, one key step of preventing oneself from contracting coronavirus (COVID-19) is to avoid touching eyes, nose, and mouth with unwashed hands. However, touching one's face is a frequent and spontaneous behavior—one study observed subjects touching their faces on average 23 times per hour. Creative solutions have emerged amongst some recent commercial and hobbyists' projects, yet most either are closed-source or lack validation in performance. We develop FaceOff—a sensing technique using a commodity wrist-worn accelerometer to detect face-touching behavior based on the specific motion pattern of raising one's hand towards the face. We report a survey (N=20) that elicits different ways people touch their faces, an algorithm that temporally ensembles data-driven models to recognize when a face-touching behavior occurs, and results from a preliminary user testing (N=3, for a total of about 90 minutes).
ACM Reference Format:
Xiang ‘Anthony’ Chen. 2018. FaceOff: Detecting Face Touching with a Wrist-Worn Accelerometer. In Woodstock ’18: ACM Symposium on Neural Gaze Detection, June 03–05, 2018, Woodstock, NY. ACM, New York, NY, USA, 5 pages. https://doi.org/10.1145/1122445.1122456
INTRODUCTION

According to the Centers for Disease Control and Prevention (CDC), one key step of preventing oneself from contracting coronavirus (COVID-19) is to avoid touching eyes, nose, and mouth with unwashed hands. Pathogens picked up by our hands can enter the throat and lungs through mucous membranes on the face.

However, touching one's face is a frequent and spontaneous behavior—Kwok et al. observed subjects touching their faces on average 23 times per hour [11], where 44% of the contacts were made with a mucous membrane. To reduce face touching, creative solutions have emerged amongst some commercial (e.g., https://immutouch.com/) and hobbyists' projects (e.g., https://blog.arduino.cc/2020/03/10/this-pair-of-arduino-glasses-stops-you-from-touching-your-face/) since the outbreak of COVID-19. However, most are closed-source and/or lack validation in performance.

We develop FaceOff—a sensing technique using a commodity wrist-worn accelerometer to detect face-touching behavior based on the specific motion pattern of raising one's hand towards the face (Figure 1). We consider the touching of both mucosal and non-mucosal facial areas. Although it is touching the mucosal areas that might cause infection, detecting touches to both types of areas can more strongly raise people's awareness of avoiding touching their face at all. In a survey, we asked 20 participants to describe where they would naturally touch their face; the results show a wide range of facial areas (Figure 2).

Figure 2: Our preliminary survey (N=20) shows that people touch a wide range of facial parts. Our goal is to detect the touching of both mucosal and non-mucosal areas to strongly promote the awareness of avoiding touching one's face at all.

To detect face touching, we chose the accelerometer—a common and low-cost sensor available in most wrist-worn devices (e.g., smart watches, fitness trackers).
The use of accelerations has shown promise in detecting body-tapping behavior without the need to instrument the body [1], although detecting face touching has not been explored in prior work. We hypothesize that an accelerometer can detect face touching by recognizing the hand's motion pattern—the unique course of accelerations and orientations as the hand moves towards the face. However, one limitation is that an accelerometer cannot detect the actual contact with the face. For example, adjusting one's eyeglasses would appear highly similar to touching the eyes.

Based on the reported face touching, we develop a data collection protocol. Due to the COVID-19 pandemic, we had limited access to participants for data collection; thus we gathered training data only from the first author. We then develop an algorithm that temporally ensembles data-driven Random Forest classifiers to binarily detect whether a person touches their face. We analyze and validate this approach in a preliminary study on three other participants (demographic information reported in Appendix A), each of whom wore the device for 30 minutes with intermittent prompts to touch their face while conducting their own daily activities (each participant thoroughly cleaned their hands and the physical objects they touched prior to the study). As a result, 82 out of 89 (92%) face-touching actions were detected with a false positive rate of 0.59%.

Contributions of this work are as follows.
• To the best of our knowledge, the first reported evidence of the potentials and limitations of detecting face touching using a commodity wrist-worn accelerometer;
• A systematic protocol of data collection, training, and testing, which can be adopted by future research to explore other sensing solutions for detecting face touching to combat the COVID-19 pandemic.
RELATED WORK

There is a large body of work on wearable health sensing, for which we refer the readers to Pantelopoulos et al.'s survey [14]. Our review is focused on three prior research topics most related to our work.
Body-centric interaction leverages (intentionally) tapping on different body parts, which can be used to trigger specific digital information or functions [2, 6]. Similar examples include tapping on the body during running or cycling to control one's smart devices [5, 17], or moving a smartphone to the mouth for activating speech input [19]. Chen et al. demonstrate using an inertial measurement unit (IMU) and a phone's front camera to detect spatial interaction around a user's body [3], and later a single IMU alone to detect tapping the phone on a number of on-body locations [1]. However, detecting face touching has never been investigated.
Detecting eating/drinking, which also involves a hand's motion relative to one's face, has been explored in prior work using a wrist-worn inertial sensor (e.g., [4, 15, 16]). However, detecting face touching presents new challenges: (i) unlike eating, the person might no longer be constrained in a seated position; and (ii) the motion pattern of one's hand is no longer limited to delivering food to the mouth but could vary across touching a range of facial parts.
Sensing personal hygienic behaviors is related to the purpose of detecting face touching and in the literature is dominated by two specific activities: hand washing and tooth brushing. For hand washing, various locations of wearable sensors have been explored: Li et al. use an IMU on a wrist-worn device to detect whether a user's hand washing follows WHO's 13-step recommendation [13]. Zhang et al. developed a ring device with fluid sensors for real-time monitoring of hand-hygiene compliance [20]. Kutafina et al. enabled hand hygiene training in medical education using a forearm electromyography (EMG) sensor [10]. For tooth brushing, Hong et al. used an accelerometer + RFID sensor combination mounted on the back of a user's hand to recognize activities including tooth brushing [7]. Huang and Lin used a magnet attached to a manual toothbrush and an off-the-shelf smartwatch to detect fine-grained brushing behaviors on individual teeth [8]. Wijayasingha and Lo proposed a wearable sensing framework using an IMU for monitoring both hand washing and tooth brushing [18]. However, no prior work has addressed sensing the hygienic behavior of face touching.
DATA COLLECTION

We collected accelerometer data from the first author (male, 32, right-handed) wearing an Apple Watch 2 (100 Hz sampling rate) on the left wrist.
The independent variable is Behavior (Touch vs. No Touch).

For Touch, we consider where and how. For where to touch, we include various parts of the face based on CDC's guidelines (eyes, nose, and mouth) as well as other frequently touched areas indicated in the aforementioned survey: hair, forehead, temple, ear, cheek, and chin. Since our face is symmetric, we also consider the left vs. right side for certain facial parts (e.g., left vs. right ear). In terms of how to touch, we cover both transient touch (e.g., a quick scratch on the nose) and lingering touch (e.g., holding and rubbing the chin). We also consider hand placement (before face-touching), following and extending the experiment design in [1]. Specifically, we consider high, middle, and low hand placement: for high placement the user sits and places the forearms on a desk in front of them; for middle placement the user sits and places the hands on the laps; for low placement the user stands and lowers their arms naturally around the waist. Different initial hand placements result in different trajectories the hand travels towards the face.

For No Touch, we adopt the set of general daily activities used in [9, 12]: flipping magazines, jumping jacks, typing on a keyboard, speaking while gesturing, and washing hands.

The data collection was split into three sessions corresponding to the three hand placement conditions for Touch. In between sessions the user took off and then put the watch back on, which accounted for the possible variance of how the watch was worn (e.g., position, orientation, tightness). At the beginning of a session, the user thoroughly cleaned the watch and washed and dried the hands. We then started with collecting data for Touch where the user was prompted with instructions on the watch to touch a facial part either transiently or lingeringly.
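The Touch prompting described above can be driven by a balanced, randomized schedule. Below is an illustrative sketch (not the authors' code) assuming eight trials per facial part, symmetric parts split evenly left/right, and touch styles split evenly between transient and lingering; the part names follow the paper:

```python
import random

# Illustrative sketch of a balanced, randomized Touch prompt schedule.
# Assumptions: 8 trials per facial part; symmetric parts split evenly
# between left/right; transient vs. lingering split evenly overall.
SYMMETRIC_PARTS = ["eye", "temple", "ear", "cheek"]
CENTER_PARTS = ["nose", "mouth", "hair", "forehead", "chin"]
TRIALS_PER_PART = 8

def build_schedule(seed=0):
    rng = random.Random(seed)
    trials = []
    for part in CENTER_PARTS:
        trials += [(part, None)] * TRIALS_PER_PART
    for part in SYMMETRIC_PARTS:
        trials += [(part, "left")] * (TRIALS_PER_PART // 2)
        trials += [(part, "right")] * (TRIALS_PER_PART // 2)
    rng.shuffle(trials)  # randomize facial-part order (no temporal dependence)
    styles = ["transient", "lingering"] * (len(trials) // 2)
    rng.shuffle(styles)  # even, random split of how to touch
    return [(part, side, style) for (part, side), style in zip(trials, styles)]

schedule = build_schedule()
```

Each schedule entry yields a prompt such as "touch left cheek lingeringly".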
The user would tap anywhere on the watch screen, which started the data collection of a face-touching trial. The user then proceeded to touch the designated facial part. The data collection ended automatically after our empirically pre-defined 1.5 s window (which covers the time taken to raise one's hand and engage it in the touching of a facial part). Each facial part was touched eight times. For symmetric parts (e.g., eyes, ears) these trials were evenly split between left/right sides. The order of facial parts was randomized to avoid temporal dependence of behavior. Further, across all the trials, how to touch was evenly and randomly split between transient vs. lingering. For example, the user would be prompted to "touch left cheek lingeringly".

Next, we collected data for No Touch where the user simply performed each aforementioned activity for 30 s while the system randomly collected 10 samples. In total, we collected 1 user × … per session (8 trials per facial part × …) data points.

To visualize the data, we first downsample each trial to 15 bins, each containing a time interval of 0.1 s of accelerometer data. Figure 1 shows an aggregated view of the aforementioned collected data: each column in each chart corresponds to one bin that contains a distribution of accelerometer readings within the bin's time interval. We can see how face touching and general activities exhibit distinct motion patterns across all the axes (note that the axis alignment is configured for wearing the watch on the left wrist). For example, despite touching different facial areas, the X axis almost always points up, as shown in the latter half of the time window.
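The per-trial binning used for visualization can be sketched as follows (a minimal sketch assuming a 1.5 s trial sampled at 100 Hz, i.e., 150 samples per axis; the array shapes and names are our own):

```python
import numpy as np

# Minimal sketch of the visualization preprocessing: split a 1.5 s,
# 100 Hz trial (150 samples x 3 axes) into 15 bins of 0.1 s each.
SAMPLE_RATE = 100  # Hz, per the Apple Watch setup described in the paper
N_BINS = 15        # 15 bins of 0.1 s each

def bin_trial(trial):
    """trial: (150, 3) array of X/Y/Z accelerations in g.
    Returns a (15, 3, 10) array: 15 bins x 3 axes x 10 samples per bin."""
    samples_per_bin = trial.shape[0] // N_BINS  # 10 samples = 0.1 s
    return trial.reshape(N_BINS, samples_per_bin, 3).transpose(0, 2, 1)

binned = bin_trial(np.zeros((150, 3)))
```

Each bin can then be rendered as one column of the per-axis distribution charts in Figure 1.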
For the Y axis, the peak at the beginning of the time window is likely caused by the motion of rotating one's forearm around the elbow and towards the face, where Y would first point up and then 'flatten' when the hand is raised to the face level.

Rather than individual bins, our featurization focuses on characterizing the entire 1.5 s window of data as a distribution. Specifically, our features consist of a statistical summary (sum, mean, median, standard deviation, coefficient of variation, zero crossing, mean/median absolute deviation) and shape-related measurements (skewness and kurtosis). In total, we use these 10 features per axis × 3 axes = 30 features.

Since the goal of detecting face touching is prevention, it is beneficial to detect a face-touching action before its completion. In other words, the question is whether we can use a smaller time window than 1.5 s. To investigate this possibility, we train a series of Random Forest models based on the collected data from the one user. Each model uses only a partial window of a trial's data for inference, e.g., at t = 1.0 s, a model f_{1.0s} only uses data up to that point, i.e., between 0 and 1.0 s. We compute the F1 scores from a 10-fold cross validation.

Figure 3: F1 scores of using a partial time window of data (e.g., for t = 1.0 s, a model only uses data from 0 to 1.0 s) to detect face touching.

As shown in Figure 3, as we 'wait for' more data, the F1 score expectedly increases, and beyond 1.0 s the F1 score seems to flatten. Thus we select 1.0 s as the starting point from which a model would detect face touching. Specifically, we train different models to process incoming data at T = {1.0 s, 1.1 s, 1.2 s, 1.3 s, 1.4 s, 1.5 s}. At t ∈ T, the current data is x_t and the corresponding model is f_t: x → {−1, 1} (1 for face touching and −1 for no face touching). A final result is obtained via voting where each model's vote is its result weighted by the F1 score: sgn(Σ_{t∈T} w_t f_t(x_t)), where sgn is the sign function, w_t = exp(λ(F1_t − min_T F1)), and λ is a constant to scale the difference amongst the F1 scores (we use λ = …). Implementation details about hyperparameter tuning are reported in Appendix B.

PRELIMINARY STUDY

We recruited another three participants (male, 18; male, 22; female, 28; all right-handed); due to the COVID-19 pandemic, we had to recruit the only three people living in the same household as the first author. Two participants reported touching their face at least once an hour whereas the other was unaware of the frequency. Each participant identified the five most common ways they touched their face: all three reported ear and nose; eye, cheek, chin, and forehead were reported twice; only one participant reported hair.
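The F1-weighted temporal ensemble described earlier can be sketched as follows (our reading of the voting formula; `models`, `f1`, and `windows` are illustrative names, and `LAMBDA` is a placeholder since the paper's exact λ is not reproduced here):

```python
import math

# Sketch of the F1-weighted temporal ensemble vote. Each model f_t votes
# in {-1, +1} on the data observed up to time t; votes are weighted by
# w_t = exp(lambda * (F1_t - min F1)) and the final result is the sign
# of the weighted sum.
T = [1.0, 1.1, 1.2, 1.3, 1.4, 1.5]
LAMBDA = 1.0  # placeholder scaling constant

def ensemble_vote(models, f1, windows):
    """models[t](x) -> -1 or +1; f1[t]: cross-validated F1 of model t;
    windows[t]: features from data in [0, t]. Returns +1 (touch) or -1."""
    f1_min = min(f1[t] for t in T)
    total = sum(
        math.exp(LAMBDA * (f1[t] - f1_min)) * models[t](windows[t])
        for t in T
    )
    return 1 if total > 0 else -1  # sgn of the weighted sum
```

In practice the weighted sum can be finalized early once the remaining votes can no longer flip its sign, which enables the early detections reported in the results.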
                        P1      P2      P3      Overall
Face touching detected  25/29   27/30   30/30   82/89
False positive rate     0.66%   0.57%   0.55%   0.59%

Table 1: Preliminary testing results of ensembling models using data at {1.0s, 1.1s, 1.2s, 1.3s, 1.4s, 1.5s}.
                        P1      P2      P3      Overall
Face touching detected  23/29   27/30   30/30   80/89
False positive rate     0.62%   0.56%   0.50%   0.56%

Table 2: Preliminary testing results of using only the entire 1.5 s of data at the end of the time window.

Participants were asked to wear the watch on the left wrist (i.e., the non-dominant hand) for a period of about 30 minutes while performing daily routine activities of their choice. P1 sat on a reclined chair and browsed her phone, P2 stood and walked around the house while listening to lectures on the phone via headphones, and P3 sat down and worked on programming tasks on a laptop computer. Admittedly, this protocol only captured a brief snapshot of how well the system might work in the context of a specific user-chosen daily activity; we consider a fully in-the-wild study with more participants as future work (after the pandemic).

Before the period started, each participant thoroughly washed/dried their hands and cleaned the watch and objects they expected to touch in the next 30 minutes. During the period, each participant was randomly prompted 30 times (on average once per minute) to use their left hand to touch their face on the five facial parts they identified earlier. Each prompt consisted of two steps. First, the watch interrupted the participant's activity with a vibration that prompted them to raise the watch and read the instruction of touching a specific facial part. If a participant were to touch their face immediately, the motion would be unnatural as it would artificially start from a wrist-raising posture. Instead, we asked the participant to press the 'confirm' button and resume their activity; then in four seconds, the watch vibrated again, following which the participant would touch their face at the specified part.
We used the same 1.5 s as the sliding window length at a rate of four FPS (i.e., an 83.33% overlap between subsequent frames). Face touching was labeled on the three seconds of data after the second vibration that prompted the participant to actually touch their face. The rest of the data was labeled as no face touching.

We analyzed the logged data offline using models developed earlier from a different user than the three participants. The last three minutes of P1's data were lost due to technical issues, which cost one face-touching trial and some no-touching data.

As shown in Table 1, our ensemble approach detected 82 out of 89 face-touching actions. We found that sometimes we did not have to wait until the end of the window to finalize the vote result—a majority vote could be formed at t < 1.5 s. Specifically, amongst the 82 face touches we detected, 18 were detected at t = … s, 28 at t = … s, and the rest at 1.5 s. In comparison, if we were to use only the entire 1.5 s of data at the end of the time window, the recall rate would be slightly lower (80/89), as shown in Table 2.

We also compute the false positive rate, which is the percentage of no-face-touching instances labeled as face-touching. As shown in Tables 1 and 2, the two approaches achieved similar false positive rates (with only a 0.03% difference).

Overall, the results show that it is feasible to detect face touching using a wrist-worn accelerometer: our ensemble approach achieved a slightly higher recall rate without compromising the prevention of false positives. Meanwhile, for over half of the detected touches (18+28 of 82), an earlier majority vote was able to preemptively determine a face-touching action before collecting the entire window of accelerometer data.

DISCUSSION

The inability to detect actual hand-face contact. Based on our observations, a number of false positives were caused by behavior that resembles touching one's face, e.g., scratching the back of one's neck, raising eyeglasses, picking up a phone call, or drinking water. However, it is a reasonable design choice to alert a user of such false positives, as their hand is still brought into close proximity to the face even without touching. It would be appropriately aggressive to promote an awareness of trying to keep one's hands off the face as much as possible, regardless of whether actual contact is made.
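The test-time sliding-window segmentation described above (1.5 s windows advanced at four frames per second) can be sketched as follows; the constants assume the 100 Hz sampling rate, and the function name is our own:

```python
# Sketch of the test-time segmentation: 1.5 s windows advanced at four
# windows per second (0.25 s stride), i.e., an 83.33% overlap between
# subsequent frames. Assumes 100 Hz accelerometer sampling.
SAMPLE_RATE = 100
WINDOW = int(1.5 * SAMPLE_RATE)  # 150 samples per frame
STRIDE = SAMPLE_RATE // 4        # 25 samples -> 4 frames per second

def sliding_windows(samples):
    """Yield (start_index, frame) pairs over a stream of samples."""
    for start in range(0, len(samples) - WINDOW + 1, STRIDE):
        yield start, samples[start:start + WINDOW]

# Overlap between subsequent frames: 1 - 25/150 = 83.33%.
overlap = 1 - STRIDE / WINDOW
```

Each yielded frame would then be featurized and fed to the classifiers for a vote.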
The current set of no-touch activities needs to be expanded: as future work explores other types of sensors, e.g., proximity detection between the hand and the face, it is also possible to use cameras instrumented in the environment, using rich visual information to distinguish face touching from certain near-miss activities (e.g., eating, drinking). Using cameras, it is also possible to annotate naturally-occurring face-touching behaviors, which addresses the currently-controlled experiment task setting. However, privacy issues and line-of-sight constraints are two long-standing challenges for camera-based solutions.
One-handed detection only is another apparent limitation. While it is uncommon to ask users to wear two watches, future work can explore an alternate form factor, e.g., a wrist-band accelerometer (e.g., Fitbit-like devices), which would be more socially and economically appropriate to wear on both hands.
Feedback mechanisms to effect behavior change would be an important next step now that we have demonstrated a proof-of-concept mechanism for detecting face touching. Future work should investigate both visual and vibrotactile feedback that alerts a person both at the onset of face touching (for prevention) and after one touches the face (for feedback that encourages behavior change). A longitudinal study should be conducted to track people's behavior change given the detection of and alerts about face touching.
Larger-scale data collection and user testing should be conducted (when feasible) to account for the possibly different ways people touch their face, to improve the data-driven models, and to obtain statistically significant performance evaluation results.
REFERENCES

[1] Xiang ‘Anthony’ Chen and Yang Li. 2016. Bootstrapping User-Defined Body Tapping Recognition with Offline-Learned Probabilistic Representation. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology. ACM, 359–364.
[2] Xiang ‘Anthony’ Chen, Nicolai Marquardt, Anthony Tang, Sebastian Boring, and Saul Greenberg. 2012. Extending a mobile device’s interaction space through body-centric interaction. In Proceedings of the 14th International Conference on Human-Computer Interaction with Mobile Devices and Services. ACM, 151–160.
[3] Xiang ‘Anthony’ Chen, Julia Schwarz, Chris Harrison, Jennifer Mankoff, and Scott Hudson. 2014. Around-body interaction: sensing and interaction techniques for proprioception-enhanced input with mobile devices. In Proceedings of the 16th International Conference on Human-Computer Interaction with Mobile Devices and Services. ACM, 287–290.
[4] Yujie Dong, Jenna Scisco, Mike Wilson, Eric Muth, and Adam Hoover. 2013. Detecting periods of eating during free-living by tracking wrist motion. IEEE Journal of Biomedical and Health Informatics 18, 4 (2013), 1253–1260.
[5] Nur Al-huda Hamdan, Ravi Kanth Kosuru, Christian Corsten, and Jan Borchers. 2017. Run&Tap: Investigation of On-Body Tapping for Runners. In Proceedings of the 2017 ACM International Conference on Interactive Surfaces and Spaces. 280–286.
[6] Chris Harrison, Desney Tan, and Dan Morris. 2010. Skinput: appropriating the body as an input surface. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 453–462.
[7] Yu-Jin Hong, Ig-Jae Kim, Sang Chul Ahn, and Hyoung-Gon Kim. 2008. Activity recognition using wearable sensors for elder care. Vol. 2. IEEE, 302–305.
[8] Hua Huang and Shan Lin. 2016. Toothbrushing monitoring using wrist watch. In Proceedings of the 14th ACM Conference on Embedded Network Sensor Systems CD-ROM. 202–215.
[9] Runchang Kang, Anhong Guo, Gierad Laput, Yang Li, and Xiang ’Anthony’ Chen. 2019. Minuet: Multimodal Interaction with an Internet of Things. To appear at the ACM Symposium on Spatial User Interaction. ACM.
[10] Ekaterina Kutafina, David Laukamp, and Stephan M Jonas. 2015. Wearable Sensors in Medical Education: Supporting Hand Hygiene Training with a Forearm EMG. In pHealth. 286–291.
[11] Yen Lee Angela Kwok, Jan Gralton, and Mary-Louise McLaws. 2015. Face touching: A frequent habit that has implications for hand hygiene. American Journal of Infection Control 43, 2 (2015), 112–114.
[12] Gierad Laput, Robert Xiao, and Chris Harrison. 2016. ViBand: High-fidelity bio-acoustic sensing using commodity smartwatch accelerometers. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology. 321–333.
[13] Hong Li, Shishir Chawla, Richard Li, Sumeet Jain, Gregory D. Abowd, Thad Starner, Cheng Zhang, and Thomas Plötz. 2018. WristWash. In Proceedings of the 2018 ACM International Symposium on Wearable Computers (ISWC ’18). ACM Press, New York, NY, USA, 132–139. https://doi.org/10.1145/3267242.3267247
[14] Alexandros Pantelopoulos and Nikolaos G Bourbakis. 2009. A survey on wearable sensor-based systems for health monitoring and prognosis. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews) 40, 1 (2009), 1–12.
[15] Edison Thomaz, Abdelkareem Bedri, Temiloluwa Prioleau, Irfan Essa, and Gregory D Abowd. 2017. Exploring symmetric and asymmetric bimanual eating detection with inertial sensors on the wrist. In Proceedings of the 1st Workshop on Digital Biomarkers. 21–26.
[16] Edison Thomaz, Irfan Essa, and Gregory D Abowd. 2015. A practical approach for recognizing eating moments with wrist-mounted inertial sensing. In Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing. 1029–1040.
[17] Velko Vechev, Alexandru Dancu, Simon T Perrault, Quentin Roy, Morten Fjeld, and Shengdong Zhao. 2018. Movespace: on-body athletic interaction for running and cycling. In Proceedings of the 2018 International Conference on Advanced Visual Interfaces. 1–9.
[18] Lahiru NS Wijayasingha and Benny Lo. 2016. A wearable sensing framework for improving personal and oral hygiene for people with developmental disabilities. IEEE, 1–7.
[19] Zhican Yang, Chun Yu, Fengshi Zheng, and Yuanchun Shi. 2019. ProxiTalk: Activate Speech Input by Bringing Smartphone to the Mouth. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 3, 3 (2019), 1–25.
[20] Xin Zhang, Karteek Kadimisetty, Kun Yin, Carlos Ruiz, Michael G Mauk, and Changchun Liu. 2019. Smart ring: a wearable device for hand hygiene compliance monitoring at the point-of-need. Microsystem Technologies 25, 8 (2019), 3105–3110.
A SURVEY PARTICIPANTS INFORMATION
We disseminated a face-touching survey via a business communication platform amongst members of our research group and received 20 responses. The participants were between 19 and 28 years old. There were five females, 14 males, and one non-binary person. Eleven participants reported touching their face at least once per hour, six at least once per minute, one at least once per day, and two were unaware of how often they touched their face. Participants were also asked to describe five or more different ways that they would normally touch their face in their everyday lives, the results of which we show in Figure 2. As in any other survey, asking participants to estimate the frequency of an activity is subject to the inaccuracy of memory. However, our goal is not to compute an exact frequency but to elicit a list of commonly-occurring face-touching behaviors.
B MODEL HYPERPARAMETER TUNING
We implemented the Random Forest models using the scikit-learn Python library. To further improve the models, we used the training data set to perform hyperparameter tuning: we first performed a randomized search to narrow down the ranges of parameters, within which we then employed a grid search to pinpoint specific optimal parameters. We repeated this process six times to obtain six sets of parameters for the models corresponding to T = {1.0 s, 1.1 s, 1.2 s, 1.3 s, 1.4 s, 1.5 s}, as shown below. We use scikit-learn's default values for the rest of the parameters.

t =                1.0s   1.1s   1.2s   1.3s   1.4s   1.5s
bootstrap          False  False  False  False  False  False
max_depth          200    150    150    300    200    150
max_features       log2   log2   log2   log2   log2   log2
min_samples_leaf   2      1      1      1      4      2
min_samples_split  3      2      2      3      4      3
n_estimators       150    150    300    200    100    300
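The two-stage search described above can be sketched with scikit-learn as follows; the candidate grids and the dummy data are illustrative assumptions, not the exact setup used for the reported parameters:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV

# Sketch of the two-stage tuning: a randomized search narrows parameter
# ranges, then a grid search refines around the coarse optimum.
rng = np.random.RandomState(0)
X, y = rng.rand(120, 30), rng.randint(0, 2, 120)  # stand-in for real features

coarse = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={
        "bootstrap": [True, False],
        "max_depth": [100, 150, 200, 300],
        "max_features": ["sqrt", "log2"],
        "min_samples_leaf": [1, 2, 4],
        "min_samples_split": [2, 3, 4],
        "n_estimators": [100, 150, 200, 300],
    },
    n_iter=10, cv=3, random_state=0,
).fit(X, y)

# Refine around the coarse optimum with a small exhaustive grid.
fine = GridSearchCV(
    RandomForestClassifier(
        random_state=0,
        bootstrap=coarse.best_params_["bootstrap"],
        max_features=coarse.best_params_["max_features"],
    ),
    param_grid={
        "max_depth": [coarse.best_params_["max_depth"]],
        "n_estimators": [coarse.best_params_["n_estimators"]],
        "min_samples_leaf": [1, 2, 4],
        "min_samples_split": [2, 3, 4],
    },
    cv=3,
).fit(X, y)
```

Running this once per time point in T would yield one tuned parameter set per model, as in the table above.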
C OPEN SCIENCE

The training/testing datasets, model, and source code are available at https://hci.ucla.edu/faceoff to spur the future development of face-touching detection to combat the COVID-19 pandemic.