Tale of Seven Alerts: Enhancing Wireless Emergency Alerts (WEAs) to Reduce Cellular Network Usage During Disasters
Demetrios Lambropoulos, Mohammad Yousefvand, Narayan Mandayam
WINLAB, Rutgers University
ABSTRACT
In weather disasters, first responders access dedicated communication channels, separate from civilian commercial channels, to facilitate rescues. However, rescues in recent disasters have increasingly involved civilian and volunteer forces, requiring that civilian channels not be overloaded with traffic. We explore seven enhancements to the wording of Wireless Emergency Alerts (WEAs) and their effectiveness in getting smartphone users to comply, including reducing frivolous mobile data consumption during critical weather disasters. We conducted a between-subjects survey (N=898) in which participants were either assigned no alert (control) or an alert framed as Basic Information, Altruism, Multimedia, Negative Feedback, Positive Feedback, Reward, or Punishment. We find that Basic Information alerts resulted in the largest reduction of multimedia and video services usage; we also find that Punishment alerts have the lowest absolute compliance. This work has implications for creating more effective WEAs and for better understanding how wording can affect emergency alert compliance.
KEYWORDS
wireless emergency alerts, survey, emergency communications, altruism, positive reinforcement, negative reinforcement, punishment
INTRODUCTION
Mobile device ownership has been steadily on the rise, especially as our daily tasks rapidly integrate into these devices. As of 2019, 96% of U.S. adults own a cellphone of some kind, and 81% own a smartphone [75]. As technology advances, the growing number of bandwidth-intensive tasks a smartphone can perform allows smartphone users to consume more network bandwidth in shorter time durations. The traffic expected to pass through U.S. cellular networks in 2021 is around 5.57 Exabytes per month.
In compliance with the FCC, the current network designed by FEMA is the Integrated Public Alert Warning System Open Platform for Emergencies (IPAWS-OPEN), which designates how emergency alerts are sent and received by the designated parties. IPAWS-OPEN's hierarchy is shown in Figure 1, with the alerting parties on the left and the recipients on the right. This architecture was constructed by FEMA [36] to organize and standardize how the national public can be appropriately alerted in times of emergency. Common Alerting Protocol (CAP) messages from the alerting authorities all pass through the IPAWS-OPEN router, which distributes the CAP messages to the appropriate alert dissemination channels. Our paper focuses specifically on the Wireless Emergency Alerts block located under the Private Sector Systems dissemination channel.
WEA messages were initially introduced under the Warning, Alert, and Response Network (WARN) Act of 2006 [64] and, with WEA 1.0, became functionally available after 2012 [34]. WEA messages were initially limited to 90 characters, complying with the CAP messages at the time, and were absent of graphic elements. WEA 2.0 upgrades were announced in 2016, proposing an increase of the message length to 360 characters, transmission of embedded links or multimedia, and support for Spanish-translated alert messages [32]. Cell-sector geo-targeting of WEAs and the addition of embedded links were introduced in 2017 [21]. The proposed Spanish functionality officially released in November 2018, with WEA 2.0 being fully functional by May 2019. WEA 3.0 was outlined in early 2018, proposing changes to improve alert delivery accuracy by narrowing geo-targeting requirements and providing alert preservation for 24 hours [33]. In 2018, alerting authorities sent 7098 WEA messages [23] across the United States alerting about emergency weather events.
Figure 1: IPAWS-OPEN architecture.
Figure 2: WEA message received for a Flash Flood alert, July 2020.
Following the events of 9/11, FEMA went with AT&T [37] to initialize a network for first responders to communicate over LTE on Band 14 of the 700 MHz spectrum [73]. AT&T titled their service FirstNet, which should, in critical scenarios, allow first responders to communicate through a dedicated channel with increased priority over civilian traffic. However, even with a dedicated network, first responders have utilized the help of civilians using their cell phones without FirstNet capabilities to assist in rescue efforts [103]. This is under the assumption that not all cellular towers are damaged or destroyed due to weather conditions, and that the remaining towers are not overloaded with increased traffic due to civilians performing bandwidth-intensive tasks such as uploading videos of weather events or streaming media coverage over their cellular data.
The design of WEAs is critical to their effectiveness, with recipients appropriately following received instructions. Previous work has shown that if enough information is not provided during emergencies, people will seek out the desired knowledge of the emergency themselves, potentially causing delays in public response [43, 48]. WEA messages are terse messages by design, meaning that the customization alerting authorities can implement is limited. When customizing messages, alerting authorities must take caution not to degrade the accuracy or understandability of the alerts or warnings sent out to people in the emergency location. Previous research has examined the best practices for crisis alerts on social media platforms with terse messaging [7, 14].
Previous research has indicated that the social and cognitive responses to warnings and alert messages can be modeled as a multi-stage process [51, 62, 106], with the base of this research drawing from theories of collective behavior [10], emergent norms [98], and reasoned action [39]. Upon receiving an alert message, there are six characteristics of a person's response that determine its effectiveness:
(1) Reading the alert
(2) Understanding the risk and severity of the alert provided [18]
(3) Believing the alert to be credible [7, 52, 82]
(4) Personalizing the emergency to themselves or others around them [7, 52]
(5) Deciding to take protective action [18, 52, 108]
(6) Confirming the received alert [50, 60–62, 85]
Messages which are both accurate and require low cognitive demand to understand have a higher probability of recipients taking appropriate action or response [93].
Trust is an undeniably important factor in behavioral response, as a lack thereof may lead people to ignore the contents and risks associated with an alert or warning [26, 27, 40, 91, 100, 109]. If the trust between a person and the alerting authority is diminished due to repeated false alarms or non-factual information, people may exhibit the 'cry-wolf' phenomenon, becoming non-compliant with all future alerts they believe to be false [12, 25, 86]. If people feel that they no longer trust alerts, they have the ability on newer devices to disable alert notifications for amber alerts, extreme threats, and severe threats.
Frequently repeated alerts must be avoided when possible, as long-term exposure to repeated messages may cause the recipient to become habituated or suffer message fatigue, resulting in an inability to grasp and elicit an appropriate response to a message, or in disregarding the contents altogether [95, 105]. Habituation is the cognitive phenomenon whereby repeated stimuli become increasingly ineffective at eliciting recipients' attention [55, 107].
Recipients who have become habituated to an alert may ignore the alerts without comprehending or reading the content [1, 11, 12, 29, 56, 90].
Design of Alerts
The design of effective alerts needs to ease decision-making [35]. If the WEA contents are too complicated, requiring too much cognitive demand, the desired behavior might often be lost. The addition of visual stimuli to alerts may improve the actionability of received alerts [6]. Color has been shown to effectively affect our cognitive system [30] and even our emotions [65]; the color red in particular has been linked to performance attainment [30]. The addition of sensory inputs such as vibration strength and volume level to emergency alerts can assist sensory-impaired civilians in recognizing and taking action on the alert [63]. The alerts used in our survey followed the guidelines [58] for best practices in designing WEAs; however, due to limitations of the current WEA protocol, no features revolving around color have been added to the text of the alerts delivered to participants.
For our study, the alerts' design did not account for users becoming habituated to an alert, as they were only presented a single WEA. However, an alert design that would best elicit a positive response through a single alert is optimal, as repeated alerts may cause habituation. To avoid message fatigue, the contents of the WEAs we designed for the study were made easily understandable and structured so that the participant knows what is going on, what is expected of them, and, based on the alert they were assigned, what would happen if they comply.
Geo-targeting as a design feature for WEAs is the act of sending emergency alerts only to people in affected areas, with higher granularity than city-wide alerting. Targeted alerts have been shown to be effective in mitigating exposure to contaminated water areas [92], and previous research has shown that geo-targeting of alerts can be upgraded utilizing a smartphone's location history [49].
Without geo-targeting of emergency alerts, and with WEAs' current limitations, civilians in unaffected areas may end up disabling the alerting service on their smartphones [49], and frequent exposure may damage the trust and image civilians hold of the alerting authority [12].
Use of Social Media in Disasters
Civilians naturally seek out sources of confirmation when unexpected information or events are presented to them [5]. In the design of Australia's Public Emergency Warning System, it was noted that if civilians did not receive confirmation, there would be a possibility of them dismissing the emergency alert [57]. Previous works have examined this phenomenon, where social media or microblogging services emerged as destinations to collect information on natural or human-made disasters such as: the 2001 World Trade Center attacks [79], 2004 Indian Ocean tsunami [53], 2005 Hurricane Katrina [67, 80, 97], 2007 Virginia Tech shootings [69, 102], 2007 California wildfires [81, 94], 2008 Sichuan earthquakes in China [74], and the 2008 Northern Illinois shootings [68].
Methods of communication are vulnerable to their surrounding infrastructure and electric grids [67, 83]. Amid disasters, civilians attempt to use their phones to share their status with their friends and family [94]. During Hurricane Maria in Puerto Rico in 2017, civilians traveled to less affected areas for cellular service [8], and others tried switching telecom providers to those whose infrastructure was less affected by the hurricane [13].
Before and after the internet era, civilians have spontaneously volunteered to assist in disasters [28, 41, 47, 96]. People will form altruistic communities during disasters to assist those affected [38, 96].
The use of information and technology in rescue and recovery operations during disasters is known as crisis informatics, a term coined in 2007 [42]. During disasters, social media has been a medium for exchanging information relevant to the context of the situation [9, 83, 94, 101]. As such a medium, first responders can utilize social media to crowdsource specific tasks to assist rescue and recovery operations [24, 54]. Communication between first responders and civilians through social media has shown promising results [77].
Volunteer civilians can also utilize social media for coordination among other volunteers [45, 76, 88, 104]. With the amount of information being provided by social media sources, first responders have the ability to detect new issues arising as a result of an ongoing disaster [72, 78].
The use of social media during disasters has many benefits; however, it is not without flaws. Social media sources lack credibility for their information [44]. Reliable access to the information source, and the source not becoming overloaded with information, cannot be guaranteed to first responders [59]. Methods to mitigate information overload during disasters have been examined and shown to be successful by applying filters [46]. Organizations behind popular social media lack resources and guidance for first responders to optimally utilize the information source [71].
To the best of our knowledge at the time of writing this paper, the only prior research on the reduction of mobile usage before and after receiving a WEA alert was presented as a poster [110] and demonstrated through a survey that participants might not be willing to reduce their mobile usage. In this paper, we design several alternative framings for WEAs to increase user compliance with message contents. The seven alerts designed contained basic information similar to current WEAs, altruism to compare with previous research, multimedia, negative feedback, positive feedback, reward, and punishment.
Our contributions are: (1) we use a simple categorization of the 12 most used mobile app types based on their data rate, in order to capture and measure users' daily traffic; (2) we show that the effectiveness of WEAs varies depending on the wording and objective; for example, basic information provides the largest reduction in hours using multimedia and video service tasks, using punishment in alerts can diminish compliance, and positive reinforcement has the most considerable absolute compliance to alerts.
We recruited a total of 898 participants through Amazon's Mechanical Turk (MTurk) to take part in a survey observing their mobile usage behavior before and after receiving a WEA message in a hypothetical critical weather disaster. Forty-four participants were removed from the analysis due to failing the attention check or choosing to skip required responses, leaving a final sample size of 854. Participants' recruitment criteria were to be fluent English speakers, smartphone owners, and residents of the United States of America. The duration of the study was determined to be approximately 20 minutes from the previously launched pilot study, so to appropriately match the duration with the nation's minimum wage [66], participants were awarded $2.50 for completing the survey. Men comprised 51% of the sample collected, and women comprised the remaining 49%. Ages ranged from 18 to 65+, with a majority of the participants in the 25-34 and 35-44 ranges. The education background of participants was diversely distributed between Less than High School, High School, Associates Degree, Bachelors Degree, and Graduate School.
A between-subjects design was made, dividing participants into one of 7 alert groups or a control group where no alert is received at all. The seven alert types were Basic Information, Altruism, Multimedia, Negative Feedback, Positive Feedback, Reward, and Punishment. The messages displayed to the participants can be seen in Figure 4; each participant received only one of the alerts unless they were randomly assigned to the control group. Those in the control group received no alert message and were instead shown the following: "During natural disasters like hurricanes, after observing the change in environment, how frequently do you think you would perform each of these tasks?" Of the remaining 854 participants, 104 were assigned to the Altruism alert, 102 to the Multimedia alert, 111 to the Basic alert, 107 to the Negative Feedback alert, 107 to the Positive Feedback alert, 111 to the Punishment alert, 107 to the Reward alert, and 103 to the control group receiving no alert during the study.
First, participants completed an informed consent form approved by our University's Institutional Review Board. In the next stage of the study, participants reported the amount of time they best estimated they perform each of the 12 bandwidth-intensive tasks on an average day and were tasked with categorizing tasks according to how much network resource each consumes, using descriptive labels from "Very Low (1-10 texts)" to "Very High (10,001+ texts)".
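The between-subjects assignment described above can be sketched as follows. The condition names come from the study; the helper function, seed, and uniform random choice are illustrative assumptions, not the actual survey platform's mechanism:

```python
# Sketch of the between-subjects assignment: each participant lands in
# one of the seven alert conditions or the control group.
import random

CONDITIONS = ["Basic Information", "Altruism", "Multimedia",
              "Negative Feedback", "Positive Feedback",
              "Reward", "Punishment", "Control"]

def assign(participant_ids, seed=0):
    """Randomly (and reproducibly, given a seed) assign each participant
    to one condition."""
    rng = random.Random(seed)
    return {pid: rng.choice(CONDITIONS) for pid in participant_ids}

groups = assign(range(854))
print(all(g in CONDITIONS for g in groups.values()))  # True
```

With a fixed seed the assignment is reproducible, which is useful when re-running an analysis pipeline over the same recruitment batch.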
The number of text messages listed next to each of these options is there to help participants conceptualize what the descriptive label means in terms of data, as a well-known reference, considering that some technologically illiterate participants may not be familiar with the notion of bandwidth, as demonstrated in Figure 5. All participants completed this stage before being randomly assigned to either the control group or one of the seven groups, which represent possible improvements that can be applied to the WEA branch of IPAWS-OPEN.
After being randomly assigned to their groups, participants are then asked, for each of the 12 bandwidth-intensive tasks, whether they would now use the task "More than before the alert," "Same as before the alert," "Less than before the alert," or "Never (I won't perform this task at all)." To test the reliability of responses and to make sure that participants were answering to the utmost of their ability, we required users to also respond with how long they estimated they would utilize the 12 tasks, in a similar fashion to the first part of the survey, and paired this with an attention check in the middle of the survey. The participants' responses on how long they would perform tasks after receiving alerts (More than before, Same as before, Less than before, or Never) help us calculate the total time duration of users' usage of each application, which is then interpreted as the user's estimated data from using each application.
We categorized participants' compliance with the alert using three levels for each task: partial compliance ("Less than before the alert" is selected), full compliance ("Never (I won't perform this task at all)" is selected), and no compliance ("Same as before the alert" or "More than before the alert" is selected). Also, we characterize the situation in which the participant shows full compliance for all the tasks as the absolute compliance level.
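The three-level compliance coding just described can be sketched in code. The function names are hypothetical; the response strings are the survey options quoted above:

```python
# Hypothetical sketch of the three-level compliance coding plus the
# "absolute compliance" criterion described in the text.

def compliance_level(response: str) -> str:
    """Map a post-alert usage response to a compliance level."""
    if response == "Never (I won't perform this task at all)":
        return "full"
    if response == "Less than before the alert":
        return "partial"
    # "Same as before the alert" or "More than before the alert"
    return "none"

def absolute_compliance(responses_per_task: list[str]) -> bool:
    """A participant is absolutely compliant only if every task is
    fully restricted."""
    return all(compliance_level(r) == "full" for r in responses_per_task)

# Example participant: restricts video entirely, only reduces social media
answers = ["Never (I won't perform this task at all)",
           "Less than before the alert"]
print(compliance_level(answers[0]))  # full
print(absolute_compliance(answers))  # False
```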
As shown in Figure 3, the tasks which smartphone users perform can be divided into three categories: heavy tasks, medium tasks, and light tasks. Heavy tasks involve those tasks which are video or multimedia-related, consuming the most network resources and quickly consuming multiple MegaBytes (MBs) of data within a minute of use. Heavy tasks also include social media such as Facebook, Instagram, and Twitter, as they are frequently saturated with video content. Medium tasks, such as uploading photos or streaming music, are a large part of daily smartphone use, consuming a magnitude of data less than those in the Heavy task category. Light tasks are those that consume such a small amount of network resources from cellular towers that having 100% of the users connected to a tower perform the task should not overload the tower's capacity.
Table 1: Participant Demographics
Figure 3: The bandwidth-intensive tasks which can be performed on a smartphone, organized into categories of Heavy Tasks, Medium Tasks, and Light Tasks. The average data rate per second is provided next to each task, adapted from usage data calculators from AT&T [4], Sprint [87], and Verizon [99], as they make up a majority of cellular subscriptions in the US [89].
The bandwidth consumed for each of the tasks was extracted from averages provided by AT&T [4], Sprint [87], and Verizon [99]. The 12 bandwidth-intensive tasks were created by aggregating the different service providers' application categories to build an inclusive list of all types of mobile applications; however, the novelty is more in using these 12 categories to estimate the total generated cellular traffic of participants.
The wording of each alert serves the objective of its alert type, for example, altruism. The list of alert types in this survey, as well as their respective wording, was derived from consulting with a team of professors from our University's Psychology Department.
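The traffic-estimation step built on these categories can be illustrated with a minimal sketch: per-task average data rates multiplied by reported usage durations. The rates below are placeholder values for illustration, not the Figure 3 averages:

```python
# Minimal sketch of the traffic-estimation step: per-task average data
# rates (MB/s) multiplied by reported daily usage durations.
# The rates below are placeholders, NOT the Figure 3 values.

TASK_RATES_MB_PER_S = {       # hypothetical averages
    "Video Stream": 0.5,      # heavy task
    "Upload Photo": 0.1,      # medium task
    "SMS": 0.00002,           # light task
}

def daily_traffic_mb(usage_seconds: dict[str, float]) -> float:
    """Estimate total daily traffic (MB) from per-task usage durations."""
    return sum(TASK_RATES_MB_PER_S[task] * secs
               for task, secs in usage_seconds.items())

usage = {"Video Stream": 3600, "Upload Photo": 600, "SMS": 300}
print(round(daily_traffic_mb(usage), 2))  # 1860.01
```

Summing these per-participant estimates and scaling to a month is what yields the aggregate traffic figures reported later in the paper.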
Basic Information Alert.
As seen in Figure 4(a), basic information is provided to participants, stating just that there is an emergency and that the participant should refrain from using their device. This alert is similar to alerts currently received for weather conditions, with an added message about reducing network usage.
Altruism Alert.
The Altruism alert that participants could be assigned appeals to users that they will personally be benefiting the aid of victims of emergencies by refraining from using their mobile devices. Altruistic messages have previously been noted as an effective method for reducing usage during times of emergency [110].
Multimedia Alert.
The Multimedia alert utilizes the newest addition to WEAs: the capability of adding images to weather alerts. To determine the effectiveness of this addition, we designed an alert that provides basic information while requesting that the user refrain from utilizing their mobile device, and that includes an image of a weather disaster.
Negative Feedback Alert.
This alert attempts to utilize Negative Feedback by having the user reduce their mobile usage to avoid the adverse stimulus of potentially contributing to the suffering of victims in an emergency. The term Negative Feedback comes from the work of B. F. Skinner in 1963 [84].
Positive Feedback Alert.
The Positive Feedback alert provides information that the participant should refrain from cell phone usage and, as a positive stimulus, that they will know they are helping out other civilians. The wording is similar to that of the Altruism alert; however, it was carefully chosen to demonstrate the effects that wording has on WEA compliance.
Reward Alert.
The Reward alert tries to entice the participant with financial benefits for complying with the WEA during an emergency. Participants are incentivized with a discount off their next phone bill, dependent on how much they comply with the alert.
Punishment Alert.
The Punishment alert adds a punishment message to the basic information. Users were told that they needed to restrict their device usage until further notice, and that their punishment for refusing would be losing access to their service. This punishment is meant to instill fear in order to obtain compliance.
Participants were 49.2% female and 50.8% male, comparing with U.S. Census data [17], where the division is around 50.8% female and 49.2% male. The U.S. population's ethnicity is about 76.3% White and 23.7% other [17]; similarly, our sample population is 82.4% White and 17.6% other. In terms of age [15] and education [16] of the participants, we have observed some divergence from the general U.S. population, which statistically could be a result of having a smaller sample space.
The average data predicted to be used per month per smartphone in the US through 3G/4G/LTE is around 13 GB in 2019, equating to around 0.45 GB/day, not including mobile data that is passed through WiFi networks [31]. Using the estimated data rates in Figure 3, the sum of all participants' predicted usage is around 70 GB per month, or around 2.34 GB/day. To understand the divide of mobile traffic between cellular networks and WiFi, we find from [20] that an average user will consume around 25% of their mobile traffic through cellular and the other 75% through WiFi. Applying this rule to participant data, the average mobile data consumed on the cellular network is around 17.5 GB per month, or around 0.58 GB/day. This rate assumes that not all videos are viewed over a WiFi network and that the participants continue their reported average usage.
To better understand participants' actions, we observe their understanding of how much impact the usage of the 12 mobile application tasks has on a network compared with basic text messages. Participants were asked for each task: "On average, how do you think this task consumes network resources (bandwidth) per second as compared to sending text messages? (max length text with 160 characters)." They were then instructed to choose from the options: "Very Low (1-10 texts)," "Low (11-100 texts)," "Average (101-1000 texts)," "High (1001-10,000 texts)," and "Very High (10,001+ texts)." Among all the usage tasks in question, participants' best understanding was around 44% correct (see Figure 5).
Compliance with alerts was explored as partial compliance, full compliance, and absolute compliance. Partial compliance, as seen in Figure 7(a), was defined for participants who would reduce their usage of particular tasks once they saw their alert. Full compliance, as seen in Figure 7(b), was defined for users who would completely restrict all usage of particular tasks. Absolute compliance, as seen in Figure 6, was defined for participants who would stop the usage of all heavy, medium, and light tasks until they are further updated on the situation.
Figure 4: Alerts received by individuals during the survey: (a) Basic, (b) Altruism, (c) Multimedia, (d) Negative Feedback, (e) Positive Feedback, (f) Reward, (g) Punishment. Participants were assigned to one of the alerts or a control condition.
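The cellular/WiFi split applied to the participants' predicted usage can be checked numerically. This is a worked example of the arithmetic only, with the roughly 70 GB monthly total taken from the text:

```python
# Worked check of the cellular/WiFi split used in the text:
# ~25% of mobile traffic goes over cellular, the rest over WiFi [20].

monthly_total_gb = 70.0   # participants' predicted usage per month (approx.)
cellular_share = 0.25     # fraction of traffic carried over cellular

cellular_monthly_gb = monthly_total_gb * cellular_share
print(round(cellular_monthly_gb, 1))       # 17.5 GB/month over cellular
print(round(cellular_monthly_gb / 30, 2))  # 0.58 GB/day over cellular
```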
Participants who were assigned Negative Feedback and Basic Information alerts demonstrated the highest percentages of compliance with their alerts, as can be seen in Figures 7(a) and 7(b). In regard to the heaviest task, Video Streaming, 91.11% of Basic Information participants claimed they would at least partially comply with restrictions, and 83.33% would fully comply by fully restricting their usage. Participants showed the least compliance for SMS, Web Browsing, Audio Calling, and Social Media usage after receiving an alert, which follows prior research that civilians will increase usage of information-finding tasks after disasters [53, 67–69, 74, 80, 81, 94, 97, 102].
A one-way multivariate ANOVA was conducted to compare the effect of the independent variable (IV), participants self-reporting that they read WEAs received on their smartphone, on the dependent variables (DVs), their time reduced using heavy, medium, and light bandwidth-intensive tasks during weather disasters. There was a statistically significant effect of reading alerts on the reduction of bandwidth-intensive tasks (Wilks' Λ, p < .05).
Figure 5: Percentage of correct understanding of data consumption.
Figure 6: Absolute compliance by alert type.
Participants who were assigned to any alert type, on average, reported that they would reduce their Heavy task consumption by 91.68 minutes after receiving a WEA on their smartphone. This is in contrast to participants who received no alert at all but were informed of bad weather conditions outside, who reported that they would increase their usage of Heavy tasks by an average of 15.83 minutes. These findings highlight the results observed in Figures 8, 9, and 10, which show that using any WEA, regardless of type, could decrease users' generated cellular traffic as opposed to no alert; hence, any alert is better than no alert.
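The omnibus test idea can be illustrated with a stdlib-only sketch. The paper ran a multivariate ANOVA; this univariate one-way F statistic on made-up minutes-reduced data only shows the underlying computation, not the study's analysis:

```python
# Stdlib-only sketch of a one-way ANOVA F statistic comparing
# reduction-in-minutes across groups (readers vs. non-readers of WEAs).
# Data are hypothetical; the paper's actual test was multivariate.
from statistics import mean

def one_way_f(groups: list[list[float]]) -> float:
    """F = between-group mean square / within-group mean square."""
    all_vals = [v for g in groups for v in g]
    grand = mean(all_vals)
    k, n = len(groups), len(all_vals)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((v - mean(g)) ** 2 for g in groups for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

readers = [80.0, 95.0, 110.0, 90.0]   # minutes reduced (hypothetical)
non_readers = [10.0, 5.0, 20.0, 15.0]
print(one_way_f([readers, non_readers]) > 1.0)  # True: large F, group effect
```

A large F relative to its null distribution is what underlies the reported significance of reading alerts.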
Light tasks (GPS, SMS), Medium tasks (Gaming, Upload Photo, General, Audio Call, Web Use, Music), and Heavy tasks (Social Media, Video Call, Upload Video, Video Stream) were grouped to see if there was any significant effect from any of the alerts across the three task categories. Three omnibus ANOVA tests were conducted on the change in minutes for the three task categories to determine whether there would be interest in delving deeper into the alerts' effects on the overall categories. It was noted that there was no significant difference between alert groups in the Light task category (p > .05).
Figure 7: Full and partial compliance to received alerts: (a) partial compliance by usage task; (b) full compliance by usage task.
Medium and Heavy tasks were found to have significant differences between alerts (p < .05 for each omnibus test). For Medium tasks, pairwise t-tests showed significant reductions relative to receiving no alert for several alert types, including the Punishment alert (p < .05) and the Positive Feedback alert (p < .05). Receiving an Altruistic alert showed no significant difference in the reduction of minutes of utilizing Medium tasks from receiving no alert (p > .05). Receiving just an alert containing basic information showed a significantly larger reduction in network consumption during an emergency than the Altruistic alert (p < .05). The results of these pairwise t-tests can be graphically observed in Figure 9.
Figure 8: Estimated reduction of data consumed in GigaBytes (GBs) by alert type.
Figure 9: Hours reduced per alert type for Medium tasks.
For Heavy tasks, between being assigned any alert and not, receiving an alert containing Basic Information showed the largest significant reduction in minutes of utilizing bandwidth-intensive tasks during a hypothetical emergency (p < .05), with the remaining alert types also showing significant reductions. As with the Medium tasks, receiving an Altruistic alert has no significant difference from receiving no alert at all (p > .05). These findings consistently support altruism's possible exclusion from possible enhancements to the wording of WEAs. Results can graphically be seen in Figure 10.
Figure 10: Hours reduced per alert type for Heavy tasks.
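The pairwise comparisons reported above can be illustrated with a minimal sketch of the classic two-sample t statistic. The data and function are hypothetical, not the study's analysis code:

```python
# Stdlib sketch of the pairwise comparison step: an equal-variance
# two-sample t statistic on hypothetical minutes-reduced data for
# two alert conditions.
from statistics import mean, variance

def t_stat(a: list[float], b: list[float]) -> float:
    """Independent-samples t statistic with pooled variance."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

basic = [100.0, 120.0, 110.0, 105.0]  # hypothetical minutes reduced
altruism = [20.0, 10.0, 25.0, 15.0]
print(t_stat(basic, altruism) > 2.0)  # True: large t, conditions differ
```

A t value far from zero relative to the t distribution with n_a + n_b - 2 degrees of freedom is what drives the significant pairwise differences between alert types.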
In conclusion, we have used 12 categories of the most used cell phone application types to estimate users' daily traffic and to see the impact of alerts on reducing it. It has been observed that providing any alert to users is more helpful in reducing the load on the cellular network than no alert, as shown in the results. Also, the different wordings of alerts and their objectives can have an impact on their effectiveness in reducing non-essential cellular traffic. We noticed that just providing basic information regarding the disaster can significantly reduce cellular network consumption for both Medium and Heavy tasks, which are the main contributors to cellular traffic; we have also seen that users who usually read alerts showed higher compliance levels. Appealing to altruism in WEA alerts showed little ability to reduce participants' network consumption, which is consistent with prior research on the topic [110]. Two possible explanations for altruism's lack of effectiveness could be users' addiction to cellphone usage and people's natural habit of seeking news through different cellphone applications during disasters.
Regarding the relationship between the message lengths of WEAs and recipients' compliance, the most effective alert type was the Positive Feedback alert, which has a longer text compared to the Basic Information alert. However, longer alerts do not necessarily increase compliance; as we can see, the Punitive alerts have much less compliance compared to Basic Information alerts. The effectiveness of alerts depends on multiple variables, as discussed in the Related Work. The lengths could have an impact on these variables; for example, if alerts are too long, there is an increased probability of users not reading them.
Current WEA messages that are frequently sent, as seen in Figure 2, do not inform recipients that they should refrain from frivolous smartphone usage during emergencies.
The enhancements proposed in this paper showed significant promise at reducing mobile data usage when compared with participants who received no alert. In addition, the proposed modifications to WEAs are possible with no extra hardware modification, only an improved design of the message body on the part of alerting authorities when constructing CAP messages to send through IPAWS-OPEN.
A pro of our crowdsourced approach is that it allows us to gain an initial understanding of possible enhancements to WEAs, and it potentially reduces bias and increases validity, as participants were not interacted with directly [70].
Possible limitations of the survey are that we did not have a way to measure the effect duration of alerts, and that the durations of usage specified by users are rough estimates of their daily usage. A possible future approach would be to conduct a field study to determine how long compliance actually lasts after receiving an alert and whether this matches the predicted behaviors of participants; alternatively, in the case of hypothetical situations, we should find ways to interpret the data in a more realistic fashion and remove potential biases in the survey. This would also provide the ability to determine how long after receiving an alert it takes for a user to (1) read the alert and (2) actually reduce usage after reading. As for the second limitation, a field study would also let us obtain mobile data usage through participants' devices accurately and determine habituation effects of WEAs after repeated exposure.
ACKNOWLEDGMENTS
This work is supported in part by the NSF under Grant No. ACI-1541069. The authors thank Arnold Glass and Margaret Ingate from the Rutgers Psychology department for their insights and comments that improved the design of the survey used in this study.
REFERENCES
[1] Devdatta Akhawe and Adrienne Porter Felt. 2013. Alice in Warningland: A Large-Scale Field Study of Browser Security Warning Effectiveness. In USENIX Security Symposium (USENIX Security 13).
Computer Networks.
Disaster Response: Principles of Preparation and Coordination. Canadá. CV Mosby Company.
[6] Hamilton Bean, Brooke Liu, Stephanie Madden, Dennis Mileti, Jeanette Sutton, and Michele Wood. 2014. Comprehensive Testing of Imminent Threat Public Messages for Mobile Devices. College Park: National Consortium for the Study of Terrorism and Responses to Terrorism (2014).
[7] Hamilton Bean, Brooke F Liu, Stephanie Madden, Jeannette Sutton, Michele M Wood, and Dennis S Mileti. 2016. Disaster Warnings in Your Pocket: How Audiences Interpret Mobile Alerts for an Unfamiliar Hazard. Journal of Contingencies and Crisis Management.
Australian Journal of Emergency Management 27, 1 (2012), 27.
[10] Herbert Blumer. 1951. Collective Behavior. AM Lee, ed. New Outline of the Principles of Sociology (1951), 166–222.
[11] Rainer Böhme and Jens Grossklags. 2011. The Security Cost of Cheap User Interaction. In Proceedings of the 2011 New Security Paradigms Workshop. 67–82.
[12] Lori EA Bradford, Blessing Idowu, Rebecca Zagozewski, and Lalita A Bharadwaj. 2017. There is No Publicity Like Word of Mouth... Lessons for Communicating Drinking Water Risks in the Urban Setting. Sustainable Cities and Society.
Journal of Contingencies and Crisis Management.
Journal of Homeland Security and Emergency Management.
Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing. 1290–1303.
[25] Kirstin Dow and Susan L Cutter. 1998. Crying Wolf: Repeat Responses to Hurricane Evacuation Orders. (1998).
[26] Thomas E Drabek. 1969. Social Processes in Disaster: Family Evacuation. Social Problems 16, 3 (1969), 336–349.
[27] Thomas E Drabek and John S Stephenson III. 1971. When Disaster Strikes 1. Journal of Applied Social Psychology 1, 2 (1971), 187–203.
[28] Russell Rowe Dynes. 1970. Organized Behavior in Disaster. Heath Lexington Books.
[29] David Egilman and Susanna Rankin. 2006. A Brief History of Warnings. In Handbook of Warnings. CRC Press, 33–42.
[30] Andrew J Elliot, Markus A Maier, Arlen C Moller, Ron Friedman, and Jörg Meinhardt. 2007. Color and Psychological Functioning: The Effect of Red on Performance Attainment. Journal of Experimental Psychology: General.
Proceedings of the Eighth Symposium on Usable Privacy and Security.
Response to Disaster: Fact Versus Fiction & its Perpetuation: The Sociology of Disaster. University Press of America.
[39] Martin Fishbein and Icek Ajzen. 1977. Belief, Attitude, Intention, and Behavior: An Introduction to Theory and Research. (1977).
[40] Charles E Fritz and Eli S Marks. 1954. The NORC Studies of Human Behavior in Disaster. Journal of Social Issues (1954).
[41] Charles E Fritz and John H Mathewson. 1957. Convergence Behavior in Disasters: A Problem in Social Control. Number 9. National Academy of Sciences–National Research Council.
[42] Christine Hagar. 2007. The Information Needs of Farmers and Use of ICTs. From Mayhem to Meaning: Assessing the Social and Cultural Impact of the 2001 Foot and Mouth Outbreak in the UK (2007).
[43] H Hardin. 2017. Risk and Crisis Communication Lessons Learned From The 2016 Gatlinburg Wildfires: Learn From The Past... Prepare for The Future. In Proc., Emergency Preparedness and Hazmat Response Conf. Pittsburgh: Pennsylvania Region, Vol. 13.
[44] Amanda L Hughes, Steve Peterson, and Leysia Palen. 2014. Social Media in Emergency Management. Issues in Disaster Science and Management: A Critical Dialogue Between Scientists and Emergency Managers. FEMA in Higher Education Program (2014).
[45] Marc-André Kaufhold and Christian Reuter. 2016. The Self-Organization of Digital Volunteers Across Social Media: The Case of the 2013 European Floods in Germany. Journal of Homeland Security and Emergency Management 13, 1 (2016), 137–166.
[46] Marc-André Kaufhold, Nicola Rupp, Christian Reuter, and Matthias Habdank. 2020. Mitigating Information Overload in Social Media During Conflicts and Crises: Design and Evaluation of a Cross-Platform Alerting System. Behaviour & Information Technology 39, 3 (2020), 319–342.
[47] James M Kendra and Tricia Wachtendorf. 2003. Reconsidering Convergence and Converger Legitimacy in Response to the World Trade Center Disaster. Research in Social Problems and Public Policy 11, 1 (2003), 97–122.
[48] ED Kuligowski, FT Lombardo, LT Phan, ML Levitan, and DP Jorgensen. 2013. National Institute of Standards and Technology (NIST) Technical Investigation of The May 22, 2011, Tornado in Joplin, Missouri (2013).
[49] Sumeet Kumar, Hakan Erdogmus, Bob Iannucci, Martin Griss, and João Diogo Falcão. 2018. Rethinking the Future of Wireless Emergency Alerts: A Comprehensive Study of Technical and Conceptual Improvements. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 2, 2 (2018), 1–33.
[50] Michael K Lindell and Ronald W Perry. 2003. Communicating Environmental Risk in Multiethnic Communities. Vol. 7. Sage Publications.
[51] Michael K Lindell and Ronald W Perry. 2012. The Protective Action Decision Model: Theoretical Modifications and Additional Evidence. Risk Analysis: An International Journal 32, 4 (2012), 616–632.
[52] Brooke Fisher Liu, Michele M Wood, Michael Egnoto, Hamilton Bean, Jeannette Sutton, Dennis Mileti, and Stephanie Madden. 2017. Is a Picture Worth a Thousand Words? The Effects of Maps and Warning Messages on How Publics Respond to Disaster Information. Public Relations Review 43, 3 (2017), 493–506.
[53] Sophia B Liu, Leysia Palen, Jeannette Sutton, Amanda L Hughes, Sarah Vieweg, et al. 2008. In Search of the Bigger Picture: The Emergent Role of Online Photo Sharing in Times of Disaster. In Proceedings of the Information Systems for Crisis Response and Management Conference (ISCRAM). Citeseer, 4–7.
[54] Thomas Ludwig, Christoph Kotthaus, Christian Reuter, Sören Van Dongen, and Volkmar Pipek. 2017. Situated Crowdsourcing During Disasters: Managing the Tasks of Spontaneous Volunteers Through Public Displays. International Journal of Human-Computer Studies 102 (2017), 103–121.
[55] Christopher B Mayhorn and Anne Collins McLaughlin. 2014. Warning The World of Extreme Events: A Global Perspective on Risk Communication for Natural and Technological Disaster. Safety Science 61 (2014), 43–50.
[56] J. C. McCroskey, C. E. Larson, and M. L. Knapp. 1971. An Introduction to Interpersonal Communication. (1971).
[57] MJ McGinley, Andrew Turk, and David Bennett. 2006. Design Criteria for Public Emergency Warning Systems. (2006).
[58] John D McGregor, Joseph P Elm, Elizabeth T Stark, Jen Lavan, Rita Creel, Chris Alberts, Carol Woody, Robert Ellison, and Tamara Marshall-Keim. 2014. Best Practices in Wireless Emergency Alerts. Technical Report. Carnegie-Mellon University Pittsburgh PA Software Engineering Institute.
[59] Marcelo Mendoza, Barbara Poblete, and Carlos Castillo. 2010. Twitter Under Crisis: Can We Trust What We RT?. In Proceedings of the First Workshop on Social Media Analytics. 71–79.
[60] Denis S Mileti. 1995. Factors Related to Flood Warning Response. In US–Italy Research Workshop on the Hydrometeorology, Impacts, and Management of Extreme Floods. Citeseer, 1–17.
[61] Dennis S Mileti and Lori Peek. 2000. The Social Psychology of Public Response to Warnings of a Nuclear Power Plant Accident. Journal of Hazardous Materials 75, 2-3 (2000), 181–194.
[62] Dennis S Mileti and John H Sorensen. 1990. Communication of Emergency Public Warnings. Landslides 1, 6 (1990), 52–70.
[63] Helena Mitchell, Jeremy Johnson, and Salimah LaForce. 2010. The Human Side of Regulation: Emergency Alerts. In Proceedings of the 8th International Conference on Advances in Mobile Computing and Multimedia. 180–187.
[64] Linda KS Moore. 2008. The Emergency Alert System (EAS) and All-Hazard Warnings. Congressional Information Service, Library of Congress.
[65] Kaya Naz and Helena Epps. 2004. Relationship Between Color and Emotion: A Study of College Students. College Student J.
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 727–736.
[68] Leysia Palen and Sarah Vieweg. 2008. Emergent, Widescale Online Interaction in Unexpected Emergency Events: Assistance, Alliance & Retreat. In Proceedings from the CSCW Conference, November. 8–12.
[69] Leysia Palen, Sarah Vieweg, Sophia B Liu, and Amanda Lee Hughes. 2009. Crisis in a Networked World: Features of Computer-Mediated Communication in the April 16, 2007, Virginia Tech Event. Social Science Computer Review 27, 4 (2009), 467–480.
[70] Gabriele Paolacci, Jesse Chandler, and Panagiotis G Ipeirotis. 2010. Running Experiments on Amazon Mechanical Turk. Judgment and Decision Making 5, 5 (2010), 411–419.
[71] Linda Plotnick and Starr Roxanne Hiltz. 2016. Barriers to Use of Social Media by Emergency Managers. Journal of Homeland Security and Emergency Management 13, 2 (2016), 247–277.
[72] Daniela Pohl, Abdelhamid Bouchachia, and Hermann Hellwagner. 2015. Social Media for Crisis Management: Clustering Approaches for Sub-Event Detection. Multimedia Tools and Applications 74, 11 (2015), 3901–3932.
[73] Jason Porter. 2020. FirstNet: Reaching Rural and Remote Parts of America. https://about.att.com/innovationblog/2020/05/fn_rural_connectivity.html. (May 2020). (Accessed on 09/15/2020).
[74] Yan Qu, Philip Fei Wu, and Xiaoqing Wang. 2009. Online Community Response to Major Disaster: A Study of Tianya Forum in the 2008 Sichuan Earthquake. In Proceedings of the Information Systems for Crisis Response and Management Conference (ISCRAM).
[77] Christian Reuter and Thomas Spielhofer. 2017. Towards Social Resilience: A Quantitative and Qualitative Survey on Citizens' Perception of Social Media in Emergencies in Europe. Technological Forecasting and Social Change 121 (2017), 168–180.
[78] Takeshi Sakaki, Makoto Okazaki, and Yutaka Matsuo. 2010. Earthquake Shakes Twitter Users: Real-Time Event Detection by Social Sensors. In Proceedings of the 19th International Conference on World Wide Web. 851–860.
[79] Steven M Schneider and Kirstein A Foot. 2002. The Web After September 11. One Year Later: September 11 and the Internet. Pew Internet and American Life Project Report (2002).
[80] Irina Shklovski, Moira Burke, Sara Kiesler, and Robert E. Kraut. 2008. Use of Communication Technologies in Hurricane Katrina Aftermath. In Position Paper for the HCI for Emergencies Workshop at SIGCHI.
[81] Irina Shklovski, Leysia Palen, and Jeannette Sutton. 2008. Finding Community Through Information and Communication Technology in Disaster Response. In Proceedings of the 2008 ACM Conference on Computer Supported Cooperative Work. 127–136.
[82] Mario Silic and Andrea Back. 2013. Information Security and Open Source Dual Use Security Software: Trust Paradox. In IFIP International Conference on Open Source Systems. Springer, 194–206.
[83] Tomer Simon, Avishay Goldberg, and Bruria Adini. 2015. Socializing in Emergencies — A Review of the Use of Social Media in Emergency Situations. International Journal of Information Management 35, 5 (2015), 609–619.
[84] Burrhus F Skinner. 1963. Operant Behavior. American Psychologist 18, 8 (1963), 503.
[85] John H Sorensen. 2000. Hazard Warning Systems: Review of 20 Years of Progress. Natural Hazards Review 1, 2 (2000), 119–125.
[86] John H Sorensen and Barbara Vogt Sorensen. 2007. Community Processes: Warning and Evacuation. In Handbook of Disaster Research. Springer, 183–199.
[87] Sprint. 2020. Sprint Data Calculator. https://shop.sprint.com/content/datacalculator/index2.html. (2020). (Accessed on 04/16/2020).
[88] Kate Starbird and Leysia Palen. 2011. "Voluntweeters": Self-Organizing by Digital Volunteers in Times of Crisis. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
Human Communication Research 39, 2 (2013), 230–251.
[91] Robert W Stoddard, Joseph P Elm, Jim McCurley, Sarah Sheard, and Tamara Marshall-Keim. 2014. Wireless Emergency Alerts: Trust Model Technical Report. Technical Report. Carnegie-Mellon University Pittsburgh PA Software Engineering Institute.
[92] Hayden Strickling, Morgan Faye DiCarlo, M Ehsan Shafiee, and Emily Berglund. 2020. Simulation of Containment and Wireless Emergency Alerts Within Targeted Pressure Zones for Water Contamination Management. Sustainable Cities and Society 52 (2020), 101820.
[93] Jeannette Sutton, Emma S Spiro, Britta Johnson, Sean Fitzhugh, Ben Gibson, and Carter T Butts. 2014. Warning Tweets: Serial Transmission of Messages During the Warning Phase of a Disaster Event. Information, Communication & Society 17, 6 (2014), 765–787.
[94] Jeannette N Sutton, Leysia Palen, and Irina Shklovski. 2008. Backchannels on the Front Lines: Emergency Uses of Social Media in the 2007 Southern California Wildfires. (2008).
[95] Paula Thorley, Elizabeth Hellier, and Judy Edworthy. 2001. Habituation Effects in Visual Warnings. Contemporary Ergonomics (2001), 223–230.
[96] Kathleen J Tierney, Michael K Lindell, and Ronald W Perry. 2002. Facing the Unexpected: Disaster Preparedness and Response in the United States. Disaster Prevention and Management: An International Journal (2002).
[97] Cristen Torrey, Moira Burke, Matthew Lee, Anind Dey, Susan Fussell, and Sara Kiesler. 2007. Connected Giving: Ordinary People Coordinating Disaster Relief on the Internet. In . IEEE, 179a–179a.
[98] Ralph H Turner, Lewis M Killian, et al. 1957. Collective Behavior.
Journal of Homeland Security and Emergency Management 11, 3 (2014), 309–315.
[101] Sarah Vieweg, Amanda L Hughes, Kate Starbird, and Leysia Palen. 2010. Microblogging During Two Natural Hazards Events: What Twitter May Contribute to Situational Awareness. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 1079–1088.
[102] Sarah Vieweg, Leysia Palen, Sophia B Liu, Amanda L Hughes, and Jeannette N Sutton. 2008. Collective Intelligence in Disaster: Examination of the Phenomenon in the Aftermath of the 2007 Virginia Tech Shooting. University of Colorado Boulder, CO.
[103] Emily Wax-Thibodeaux. 2017. 'Cajun Navy' Races from Louisiana to Texas, Using Boats to Pay it Forward. Washington Post (2017).
[104] Joanne I White, Leysia Palen, and Kenneth M Anderson. 2014. Digital Mobilization in Disaster Response: The Work & Self-Organization of Online Pet Advocates in Response to Hurricane Sandy. In Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing. 866–876.
[105] Michael S Wogalter. 2006. Communication-Human Information Processing (C-HIP) Model. (2006).
[106] Michael S Wogalter, David M DeJoy, and Kenneth R Laughery. 1999. Organizing Theoretical Framework: A Consolidated Communication-Human Information Processing (C-HIP) Model. Warnings and Risk Communication (1999), 15–23.
[107] Michael S Wogalter and Christopher B Mayhorn. 2005. Providing Cognitive Support with Technology-Based Warning Systems. Ergonomics 48, 5 (2005), 522–533.
[108] Michele M Wood, Dennis S Mileti, Hamilton Bean, Brooke F Liu, Jeannette Sutton, and Stephanie Madden. 2018. Milling and Public Warnings. Environment and Behavior 50, 5 (2018), 535–566.
[109] Carol Woody and Robert Ellison. 2014. Maximizing Trust in the Wireless Emergency Alerts (WEA) Service.