Multiple Purposes, Multiple Problems: A User Study of Consent Dialogs after GDPR
Proceedings on Privacy Enhancing Technologies; 2020 (2):481–498
Dominique Machuletz* and Rainer Böhme
Abstract:
The European Union's General Data Protection Regulation (GDPR) requires websites to ask for consent to the use of cookies for specific purposes. This enlarges the relevant design space for consent dialogs. Websites could try to maximize click-through rates and positive consent decisions, even at the risk of users agreeing to more purposes than intended. We evaluate a practice observed on popular websites by conducting an experiment with one control and two treatment groups (N = 150 university students in two countries). We hypothesize that users' consent decision is influenced by (1) the number of options, connecting to the theory of choice proliferation, and (2) the presence of a highlighted default button ("select all"), connecting to theories of social norms and deception in consumer research. The results show that participants who see a default button accept cookies for more purposes than the control group, while being less able to correctly recall their choice. After being reminded of their choice, they regret it more often and perceive the consent dialog as more deceptive than the control group. Whether users are presented one or three purposes has no significant effect on their decisions and perceptions. We discuss the results and outline policy implications.

Keywords: web privacy, user study, consent, cookies, controlled experiment, choice proliferation, deception, privacy paradox, privacy by design, dark patterns
DOI 10.2478/popets-2020-0037. Received 2019-08-31; revised 2019-11-04; accepted 2019-11-25.
*Corresponding Author: Dominique Machuletz: Independent, E-mail: [email protected]. Work carried out while at the University of Münster, Germany.
Rainer Böhme: University of Innsbruck, Austria, E-mail: [email protected]

The European Union's General Data Protection Regulation (GDPR) [1] came into force in May 2018. It stipulates that data controllers (e.g., website operators) must have a legal basis for the collection and processing of personal data. One legal basis is consent: data subjects (users) agree to the data processing for specific purposes. While these requirements are not new, the GDPR's threat of sanctions and more effective enforcement led many website operators to rethink their cookie practices, or at least ensure compliance by obtaining consent before using cookies for purposes that are not covered by other legal bases [6].

Web cookies are key–value pairs stored on the client device for purposes ranging from session tracking, user recognition, counting unique users, and third-party tracking to profiling and targeted advertising [7]. As every cookie can in principle serve many purposes at the same time, and necessary cookies not carrying any personal data do not require consent, a user generally cannot verify if a website complies with the agreed purposes.

Common methods for asking web users to decide on the cookie settings are pop-up banners or dialogs that appear at the beginning of each user's first visit of a website. They typically include a notice on the data collection that asks users whether they consent to (parts of) the practices. Systematic longitudinal measurements are lacking, but one study reports that 62% of the websites in its sample used such notices in June 2018 [8]. It also shows that the implementation—specifically, the granularity of control offered to users—differs between websites. The authors of [8] conjecture that many cookie banners and dialogs are not very usable, and they provide early evidence from a series of field experiments with several variants of cookie banners placed on one website [9].

Independently, in November 2018, we noticed that some cookie consent dialogs seem to be designed to nudge users into accepting all displayed purposes. (This observation is meanwhile documented in the literature [e.g., 10].)
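The key–value nature of cookies mentioned above can be illustrated with Python's standard `http.cookies` module. This is an illustrative sketch only; the cookie names and values are invented and not taken from any website in the study.

```python
# Illustrative only: how a server sets cookies as key-value pairs.
# The names "sessionid" and "ad_id" are made-up examples.
from http.cookies import SimpleCookie

jar = SimpleCookie()
jar["sessionid"] = "a1b2c3"            # e.g., session tracking
jar["ad_id"] = "xyz-789"               # e.g., targeted advertising
jar["ad_id"]["domain"] = ".example.com"
jar["ad_id"]["max-age"] = 60 * 60 * 24 * 365  # persists for a year

# Note: the purpose is not visible in the cookie itself. Nothing in
# the key or value reveals whether "ad_id" serves statistics,
# personalization, or third-party tracking -- which is why users
# cannot verify compliance with the agreed purposes.
for morsel in jar.values():
    print(morsel.OutputString())
```

The last comment restates the point made above: the purpose binding exists only in the legal declaration, not in the technical artifact.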
(Footnote: The principles of consent and purpose binding have appeared in data protection laws since the 1970s. The specific case for web cookies was harmonized in the EU through the 2009 update of the ePrivacy Directive [2–4], but respected by only one in two websites, according to a recent measurement study [5].)

It is understandable that the industry finds cookie banners disadvantageous as they add friction to the user experience and might limit the ability to track users on and across sites. Hence, there is ample business interest in minimizing friction and maximizing positive consent decisions by optimizing interface design. Common design elements in the dialogs we observed (see Figure 1 for examples) are checkboxes for several purposes of data processing as well as buttons to either select all purposes at once or to confirm the manual selection before accessing the website.

We identify two features that might compromise usability. First, the highlighted button automatically accepts all purposes, regardless of whether any checkboxes have (or have not) been selected before the button is clicked. This button does not increase the users' choice options, but might rather "trick" them into accepting all purposes without actively selecting them. Second, the number of selectable purposes may influence users' choice, as former studies in the field of psychology revealed that a high number of alternatives has adverse effects on individuals' decision making [11, 12]. This phenomenon has also been demonstrated in the context of privacy settings [13].

These considerations call for a user study, which we have carried out in the form of a controlled classroom experiment and report in this paper. Our general research question is:
How do users react to design features of multi-purpose consent dialogs on the web in terms of actual behavior and stated perceptions?
The rest of this paper is structured as follows. First, we review the literature on consent dialog designs in Section 2. Section 3 recalls the theoretical background on choice proliferation and deception, from which we derive our hypotheses. The instrument and the administration of the controlled experiment are described in Section 4. The results of our hypothesis tests (Section 5) precede the discussion of our findings (Section 6). We conclude with some recommendations for interface design and policy development in Section 7.
We first summarize the legal requirements for GDPR-compliant consent dialogs in Section 2.1, before we review the literature on engineering solutions for specifying privacy preferences (with emphasis on the purpose) in Section 2.2.
Fig. 1. Examples of real-world cookie consent dialogs that motivated this study: a US technology news website (top) and a German airline website (bottom). Both dialogs are blocking and all items are unchecked initially (opt-in).
Article 7 of the GDPR describes the requirements of legitimate consent: it needs to be (1) freely given, (2) unambiguous, (3) informed, and (4) withdrawable at any time [1]. In the event of a dispute, the data controller must prove that the subject has truly given consent to the processing practices [14]. Specifically, consent must be communicated "by a statement" or a "clear affirmative action" [1]. Regarding the clearness of this action, ticking a checkbox on a website is considered an acceptable form, while passiveness or predefined default settings that are not actively declined by the subject do not qualify as consent decisions. The European Court of Justice has just reconfirmed this interpretation [15].

If personal data is collected for more than one purpose, data subjects need to be informed and provided with distinct opt-in choices for every purpose [1]. Besides stating these principles, the GDPR intentionally does not specify any design template or rules, and thus leaves the exploration of the design space for consent dialogs to the market participants.

For the specific case of web cookies, the market has adopted a rough classification of purposes into strictly necessary (which presumably do not require consent), preferences, statistics, and marketing (which includes third-party tracking) [16] [Fig. 4 (d) of 8]. This mirrors the approach taken in a user survey by Ackerman et al. as early as 1999 [17]. The authors distinguish between cookies for "customized service", "customized advertising", and "customized advertising across many websites". They report a decreasing willingness to agree, from 96% to 77% for users classified as "marginally concerned" about privacy, and from 43% to 14% for so-called "privacy fundamentalists" in a sample of 381 US internet users (Fig. 3 of [17]). While the former classification is implemented in popular content management systems, it is by no means the only way of defining purposes.
As a result, website operators who can afford specialized lawyers enjoy more freedom in the design of consent dialogs. Others follow common practices in order to minimize legal uncertainty, or to comply with the terms of service of third parties who provide content or code to embed (e.g., Google Analytics). The bulk of the burden lands on privacy-aware users, who need to understand and navigate each site's specific model.
Researchers have studied ways to effectively inform users about privacy policies and seek their consent to data processing long before the GDPR. For example, a CHI paper from 2001 provides design recommendations for cookie consent dialogs after evaluating design changes of the then popular browsers over time [18]. The authors criticize browsers in which users had to invest great effort when searching for an alternative to the "accept all cookies" default setting. Consent dialogs are specific forms of privacy notices, a topic so profoundly researched that Schaub et al. [19] saw the need to systematize the literature. According to their proposed taxonomy, the design space can be divided along the dimensions timing, channel, modality, and control. In the following, we use this terminology when applicable.

Bergmann [20] addresses the problem of complex and incomprehensible privacy choices. The author suggests a design for generic predefined privacy settings (timing: at setup) that are summarized in a limited number of categories. He defines four privacy profiles that differ in the acceptance level of transmission and processing of personal data. The suggested solution aims at decreasing the user's cognitive effort when selecting suitable privacy settings, but we are not aware of any empirical study evaluating this approach.

Pettersson et al. [21] discuss a similar design with predefined settings. They suggest the adoption of a privacy management system that asks users for consent before transmitting their personal data (timing: at setup). Moreover, users' acceptance of data processing practices can be configured in advance and apply to future website visits. However, the authors point out that designing consent forms that are applicable to a large number of different websites is a complex task. It might require compromises on usability, as many different settings need to be offered by the system. More specifically, Pettersson et al. [22] propose design paradigms that include suggestions for consent dialogs. Incorporating recommendations by data protection commissioners and legal experts as well as standards established in the PISA project [23], the authors present a dialog window with several mandatory and optional fields, an expandable privacy notice, information about the data recipients, and an "I agree" button. They also propose methods to overcome habituation by, for instance, using drag-and-drop actions for consent. The authors qualitatively evaluate their usability tests and find that some users did not fully trust the privacy management system.

In a follow-up study, Bergmann [24] empirically explores how to successfully communicate websites' privacy policies to users. Specifically, he compared a conventional interface for online forms to an extended version with additional explanations of privacy information that pop up in tooltips (so-called "privacy bars") while filling the form (timing: just-in-time). He finds that participants who saw the extended version were significantly more likely to be aware of the policy than the control group. But he did not measure the cost of this sophistication in terms of response time or frictions to usability. Moreover, the screenshot of the extended dialog (Fig. 2 of [24]) bears a risk of information overload. Finally, as the dialog was only tested on desktop computers, it remains unclear how this information can be perceived on small mobile displays.

Tiny displays raise the need for non-interactive forms of privacy preference negotiation. An established (but meanwhile discontinued) standard for expressing privacy preferences on the web is P3P. The standard lets websites communicate their privacy policies in machine-readable XML format (modality: machine-readable). Each XML element represents a component, such as the type of data, the purpose for data collection, and third-party recipients [25].
A language called APPEL has been developed to enable users to express their privacy preferences through predefined rules (timing: at setup), so that automated privacy decisions can be based on the user's specific settings [26].
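As a rough illustration of the machine-readable approach, consider how a client could extract the stated purposes from a P3P-style policy. The element names below follow the general P3P vocabulary, but the fragment and the preference rule are a simplified sketch of our own, not a conforming implementation of P3P or APPEL.

```python
# Simplified sketch in the spirit of P3P: a policy declares purposes
# per statement, and a user agent matches them against preference
# rules (as APPEL envisioned). Not a conforming implementation.
import xml.etree.ElementTree as ET

policy_xml = """
<POLICY name="example">
  <STATEMENT>
    <PURPOSE><admin/><develop/></PURPOSE>
    <DATA-GROUP><DATA ref="#dynamic.clickstream"/></DATA-GROUP>
  </STATEMENT>
  <STATEMENT>
    <PURPOSE><individual-analysis/><tailoring/></PURPOSE>
    <DATA-GROUP><DATA ref="#dynamic.cookies"/></DATA-GROUP>
  </STATEMENT>
</POLICY>
"""

# A hypothetical user rule: block profiling-related purposes.
blocked_purposes = {"individual-analysis", "tailoring"}

root = ET.fromstring(policy_xml)
declared = {child.tag
            for purpose in root.iter("PURPOSE")
            for child in purpose}
print("declared purposes:", sorted(declared))
print("acceptable:", declared.isdisjoint(blocked_purposes))
```

The point of the design is that the consent decision can be automated once preferences are expressed as rules, so the user never sees a dialog.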
A recent approach towards facilitating informed and GDPR-compliant user consent is proposed by Ulbricht and Pallas [27]. The authors present a privacy preference language, called YaPPL, that is targeted at consent for data practices on the Internet of Things (IoT). For the development, they analyze legal requirements for consent and transform them into technical standards that suit IoT devices (modality: machine-readable, channel: primary or secondary). The language is prototypically tested in real-world IoT applications. The authors hope that the underlying approach of YaPPL will also be implemented in IoT applications that do not have to meet the standards of the GDPR, but require a technical representation of users' privacy preferences.

Dissatisfied by the observation that many users tend to ignore notices with privacy impact [28, 29], perceive them as a threat to their privacy [30], and have been habituated to "click away" consent dialogs [31], several researchers investigated how to design more effective privacy notices. For instance, Felt et al. [32] propose design guidelines that aid mobile application developers in appropriately asking for permissions. They find that more than half of all permission requests can be automated while 16% require consent dialogs. By minimizing the number of runtime consent dialogs, the authors intend to decrease the required user attention. While technical permissions differ from legal purposes in several respects, it is conceivable that similar effects also apply to purposes. To our knowledge, this link is still unexplored.

Most closely related to the present work is the concurrent effort by Utz et al. [9], which draws on data from a field experiment exploring the design space for cookie banners. Both works share the experimental method, inquiry period (Q1/2019), and language (German). Some of their treatments and findings relate to our research questions. We shall comment on specific similarities and differences where it applies.
The most salient differences between our colleagues' work and this work are the mode of data collection (field vs. lab), the type of cookie notice studied (non-blocking banner vs. blocking dialog), the emphasis of the analysis (behavioral traces vs. stated attitudes and beliefs), and the context of scientific discovery (inductive vs. deductive). Both works leave many questions open, indicating that we are at the beginning of a relevant and potentially fruitful strand of research.

The works discussed in this section are selected pieces of the literature. They are representative in that the field focuses on technical and human aspects in many facets, but (with a few exceptions) it largely ignores economic interests [33]. In practice, we must expect that businesses use the flexibility in the design of consent dialogs for their own interest by maximizing data disclosure instead of helping users to make privacy-conscious decisions.
User studies integrate better into the body of knowledge (and, arguably, generalize better) if the hypothesized causal links are derived from established theory. Therefore, we revisit relevant theories for explaining the effect of the two characteristic components in the consent dialogs inspiring this work (Fig. 1). Specifically, we review choice proliferation in Section 3.1 to reason about the number of purposes, and social norms in combination with deception in Section 3.2 to predict the effect of the default button. Then, we formulate our hypotheses in Section 3.3.
Choice proliferation is a line of research in psychology that analyzes the influence of an increasing number of alternative choices on the human decision-making process. The phenomenon that more options result in negative effects, such as dissatisfaction, has mainly been studied in a marketing context [34, 35] and is sometimes referred to as "too much choice", "tyranny of choice", or "choice overload".

As pointed out by Johnson et al. [35], two main aspects have to be considered when evaluating the number of choices offered. On the one hand, a high number of alternatives increases the cognitive load while causing individuals to feel stressed, overwhelmed, and more likely to regret their decision [11, 12]. On the other hand, the likelihood that the choice suits the individual's preferences increases when more options are given. Thus, the practical challenge is to find the right balance.

A few works investigate the effect of increasing privacy choices on users' decision making. Korff and Böhme [13] experimentally study the influence of choice amount and choice structure in the context of privacy preferences on a fictitious business networking website. They find that participants who were confronted with a larger number of privacy settings to choose from were less satisfied with their choice and experienced more regret. The works by Knijnenburg et al. [36] and Tang et al. [37] investigate the number of privacy choices in the context of mobile location sharing. Both studies find
that the structure of presented choices significantly impacts users' tendency to disclose personal data. Utz et al. vary the number of choices of a cookie banner in their field study, but they neither relate this treatment to choice proliferation nor collect the relevant dependent variables. Since the instrument confounds the number of options with their type (5 categories, with one pre-selected, and 6 vendors; see Fig. 1 (d) and (e) in [9]), it is not easy to interpret the results. Krasnova et al. discuss the effect of an increasing amount of information items in mobile applications' permission requests [38]. The results of their experiment show that users tend to be more concerned if the permission request asks for more information items. This aspect of choice proliferation seems to be specific to privacy, because options in privacy dialogs often remind users of threats. This is rarely the case in the marketing literature on choice proliferation, where the typical study varies the number of forms of a retail product (e.g., flavors of jam).

Like almost any social science theory, choice proliferation is not undisputed. Critics argue that more options can lead to higher satisfaction since one's individual needs can be matched more precisely [39]. Moreover, more choice enables easier comparison of differences, which leads to more confident decision making [40].

Broadly related to the number of options is the number of occasions for privacy decisions. Böhme and Grossklags [41] discuss the adverse effects of escalating too many decisions to users. They postulate that only the most important decisions should be made by users, so that they do not get habituated to ignore notices as a consequence of too high complexity. Several empirical studies support this interpretation.
For example, the null result in an experiment on more or less verbose variants of the well-known consent dialog of Facebook Connect is attributed to habituated ignorance [42].

Our study connects to the literature on choice proliferation by experimentally varying the number of purposes. We adapt established constructs to measure perceived task difficulty and regret.
The concept of deception is often described as being misled due to unfair practices and can occur in many contexts when the interests of different parties collide [43]. Deception has been studied in several areas such as marketing [44–47] and organizational research [48–50]. A deceptive practice is being conducted if the targeted individual receives false information that leads to false impressions of a situation. Such false impressions may trigger decisions or opinions that would have been formed in a different way without the deceiving act.

However, deception is not always based on lying, as it may also comprise the purposeful evocation of specific actions by the targeted party; for instance, by increasing the complexity of information, or by making use of behavioral clues or clue patterns. A study by Nochenson and Grossklags [46] investigates how users of web shops are tricked into falling for post-transaction marketing tactics due to specific design elements in notices. In an experiment, they test the purchasing behavior of more than 500 users and find that above 40% signed up unintentionally for an extra service with costs. The authors find that opt-in and opt-out default buttons significantly impact the users' tendency to fall for the trick.

Citing usability guidelines [51], Böhme and Köpsell [31] underline that the default option should include the most frequently selected settings so that inexperienced users can be assisted by the decision of the majority. In this sense, default buttons can be interpreted as a descriptive social norm. However, as highlighted in a study on default privacy settings on social media websites, the preset or default options are often very disclosing and might not reflect the majority of users' privacy preferences [52].
It seems that the default button has mutated from a usability tool that improves efficiency when selecting the typical choice to a strategic tool that supports the interests of the system designer.

For several decades, scholars in the behavioral sciences have identified and quantified cognitive and social effects, some of which cause successful persuasion or deception [53]. A shared objective in these disciplines was to isolate effects, which required substantial effort given that stimuli to human subjects often confound many factors. By contrast, the recent literature that criticizes the deliberate exploitation of these biases in favor of the designer typically looks at bundles of features as they appear in practice [54]. The term "dark pattern" [55], coined in 2010, classifies designs that trick users into making decisions they do not mean to make. Bösch et al. [56] were among the first to systematize dark patterns commonly adopted for privacy invasions. For instance, users typically do not read privacy notices completely [57] and often intuitively accept the presented conditions. This behavior can be exploited by hiding undesirable terms in privacy notices. Mathur et al. [58] structure common characteristics of dark patterns along five dimensions: (1) asymmetric (unequal emphasis or obstacles for specific choices), (2) covert (hidden interface design choices), (3) deceptive (induce false beliefs), (4) hides information (obscure or delay the communication of relevant information), and (5) restrictive (limitation of choices). The authors specifically name cookie consent dialogs which make use of a highlighted "accept" button as an example for the asymmetric dimension.

In the context of the GDPR, one could argue that tactics involving increased complexity, hidden information, or unwanted default settings—if effective—violate the requirements for clear and informed consent. Our study adds empirical evidence on the effectiveness of these tactics in the specific context. We vary the presence of a potentially misleading default button and measure perceived deception, unlike the wealth of studies that quantify this bias by merely observing the behavioral reaction to default buttons. Since decisions in the privacy context often involve high cognitive load, we devise a combined (but not confounded) experiment with choice proliferation. This allows us to interpret perceived difficulty and response time—both proxies for cognitive load—in relation to perceived deception.
Against the backdrop of the features in consent dialogs used by popular websites and the underlying theoretical considerations, we postulate four hypotheses:

H1 If consent dialogs include a highlighted default button that selects all purposes, users effectively consent to more purposes than without this button.

H2 If consent dialogs include a highlighted default button that selects all purposes, users (a) regret their decision more and (b) perceive the website as more deceptive than without this button, after being informed about the purposes they effectively consented to.

H3 If consent dialogs present multiple purposes, users require more effort than for dialogs with a single purpose, as indicated by longer response times.

H4 If consent dialogs include multiple purposes, users perceive the task as more difficult than reacting to dialogs with a single purpose.

In the hypotheses and the following, we shall use the term "effective consent" to refer to the consent statement recorded by the website, independent of whether this corresponds to the user's true intention.

(Footnote: The default effect is in the order of 5 %-pts. for a consent dialog where about one of two participants agrees [31].)
To test the proposed hypotheses, we conducted a controlled experiment. We describe and justify the instrument in Section 4.1, then report on our pretests (Section 4.2) and the survey administration (Section 4.3). Ethical considerations are discussed in Section 4.4. Descriptive statistics are presented in Section 4.5.
The survey instrument has two main components: a functional mock-up website offering flight search, and an exit questionnaire. As the experimental factor, the mock-up randomly presents the user one of the three consent dialogs depicted in Figure 2. When categorizing these dialogs along the dimensions proposed by Schaub et al. [19], they constitute privacy notices which appear at setup (timing), in the primary channel, as visual pop-ups (modality) that include a blocking control. We copied the three purposes (statistics, comfort, personalization) from the airline website verbatim in order to maximize external validity, noting that they differ from the convention discussed in Section 2.2. Users could learn more about the purposes by clicking on a small roll-down button labelled "show details" (see screenshot in Fig. 10 in the Appendix). Accordingly, comfort corresponds to preferences, and personalization to marketing, however without an indication whether this includes third-party tracking.

The treatment of the first group (T1) is a deceptive dialog, which closely resembled the one we saw on the German airline website (cf. Figure 9). It contains an explanation text about different cookie settings, three selectable purposes with initially unchecked checkboxes, an expandable part providing more details about the categories, and two buttons. The first button with the text "Select all and confirm" stands out due to its yellow color. The second button is colorless and says "Confirm selection" in gray font. If the yellow button is clicked, the user (effectively) consents to all three purposes, regardless of which boxes are checked. In contrast, a click on the second button only confirms the settings that have actively been selected by the user.

The second treatment (T2) differs in the reduction of selectable categories. Specifically, it only includes the personalization purpose.
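The consent-recording rule of the T1 dialog described above can be summarized in a few lines. This is our own reconstruction for illustration; the function and button names are invented, not the mock-up's actual code.

```python
# Sketch of the effective-consent rule in the T1 dialog (our own
# reconstruction for illustration; names are invented).
ALL_PURPOSES = ("statistics", "comfort", "personalization")

def effective_consent(checked, button):
    """Return the set of purposes recorded by the website.

    checked -- purposes whose checkboxes the user actively ticked
    button  -- "select_all_and_confirm" (highlighted, yellow) or
               "confirm_selection" (plain, gray)
    """
    if button == "select_all_and_confirm":
        # The default button ignores the checkbox state entirely.
        return set(ALL_PURPOSES)
    return set(checked)

# A user who ticked nothing but clicked the highlighted button
# still effectively consents to all three purposes:
print(effective_consent(set(), "select_all_and_confirm"))
# whereas the plain button records only the active selection:
print(effective_consent({"statistics"}, "confirm_selection"))
```

The asymmetry is visible in the branch structure: only one of the two buttons respects the checkbox state, which is precisely the feature H1 and H2 test.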
Pretests have shown that personalization is perceived as the most sensitive purpose, thus we deemed it plausible to make this purpose optional. We could confirm this post hoc: only 23% of the users in the control group consent to personalization, versus 35% for comfort and 46% for statistics. The results of Utz et al. [9] corroborate this further. Arguably, the T2 dialog appears somewhat artificial, but it was the best way we could think of to reduce the number of choices without changing the dialog to a yes/no question. We could not spot any indication that users perceived this dialog as odd in the responses to an open-ended question in the exit survey.

In contrast to the two treatments, the control group did not see a highlighted default button. The control dialog offers the same three purposes as observed in reality. We refrained from presenting a version with one purpose and no default button for the lack of hypotheses on potential interaction effects, and to increase the number of subjects in the interesting three groups. Therefore, our study technically combines two 2 × 1 experiments with one overlapping group rather than realizing a complete 2 × 2 design.

We decided against additional treatments with opt-out (i.e., where purposes are pre-selected) because they are almost certainly not compliant with the GDPR [15]. For the same reasons, we see little prospect for non-blocking cookie banners if the website to some extent depends on consent as the legal basis to process personal data. For comparison, Utz et al. [9] test two opt-out conditions in their field study of non-blocking banners.

The actual flight search website has a simplistic design and only contains text fields and date selectors for the search input. To increase realism, some "special offers" for specific destinations are depicted next to a photo of the respective city.
These measures were intended to draw the focus away from the cookie dialog. The participants' interaction on the website is captured and continuously transmitted to our server. This allows us to analyze response times, click trajectories, and possible dropouts post hoc.

We measure the participants' perceptions of the website in an exit questionnaire. At first, participants are asked to freely list positive and negative aspects of the website. Thereafter, they should recall their chosen cookie settings in the dialog, first in free-text form and then in closed questions. Besides general questions on the cookie dialog, four established constructs are measured through multi-item scales. Such scales are common in psychometrics to attenuate the measurement error of individual items. All construct items are reported in Table 1. Answers were collected on 5-point rating scales with semantic anchors "strongly disagree" (1) and "strongly agree" (5). Perceived deception (PDE) is assessed using three (of originally four) items by Román [44], adapted to the context of our study. Additionally, we measure perceived difficulty (PDI), privacy attitudes (PA), and regret (RE). RE is measured twice in the questionnaire: before and after reminding the participants of their effective cookie settings.

Footnote: See Fig. 5 (1a) of [9], although the precision is low and the baseline not comparable.

Table 1. Constructs and corresponding items.

Item     Item text (translated from German)

Perceived Deception (PDE)
PDE1     When it comes to cookie settings, the website is dishonest towards its users.
PDE2     The website tries to mislead users towards selecting cookie settings which they do not intend to select.
PDE3     The website makes use of misleading tactics so that users select cookie settings which they do not intend to select.

Perceived Difficulty (PDI)
PDI1     It was incomprehensible to select cookie settings.
PDI2     It was frustrating to select cookie settings.
PDI3 ↔   It was easy to select cookie settings.

Regret (RE)
RE1      I regret my choice of cookie settings.
RE2      I would change my cookie settings if it was possible.
RE3 ↔    I am satisfied with my choice of cookie settings.

Privacy Attitudes (PA)
PA1      It is important for me to protect my privacy online.
PA2      If websites use cookies, my online privacy is impaired.
PA3      I am concerned about my online privacy being impaired by website cookies.

Items marked with ‘↔’ use inverted scales.
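To illustrate how such multi-item scales are typically turned into construct scores, the sketch below averages the items of one construct and flips the reverse-keyed ('↔') items on the 5-point scale. The response matrix and the function name are hypothetical, not taken from the study:

```python
import numpy as np

# Hypothetical 5-point responses (1 = strongly disagree, 5 = strongly agree)
# of three participants to the PDI items; PDI3 is reverse-keyed.
responses = np.array([
    [2, 3, 4],   # PDI1, PDI2, PDI3
    [1, 2, 5],
    [4, 4, 2],
])

def score_construct(items: np.ndarray, reversed_cols=()) -> np.ndarray:
    """Average multi-item ratings into one score per participant,
    flipping reverse-keyed items on a 1..5 scale (x -> 6 - x)."""
    items = items.astype(float).copy()
    for c in reversed_cols:
        items[:, c] = 6 - items[:, c]
    return items.mean(axis=1)

scores = score_construct(responses, reversed_cols=[2])  # PDI3 is inverted
print(scores)  # one PDI score per participant
```

Averaging after reverse-coding keeps all construct scores on the original 1-to-5 scale, which matches how means and medians are reported in Table 3.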
Two pretests were conducted in order to assess the clarity of the instructions and survey questions. First, we carried out two one-on-one tests using verbal probing and think-aloud techniques. Specifically, test subjects were asked to express their thought process and potential obstacles while going through the survey. Since we found it confusing to first open a link with a cookie dialog and then receive the flight search task, we decided to rearrange the instructions. This way, the participants are even more focused on flights than on cookies before visiting the website.

Footnote: The fourth item was dropped because it was too specific to the domain of online shopping.

Fig. 2. Variations of consent dialogs shown in the study: Deception (T1), Reduced choice (T2), Control. All dialogs are blocking. The participants saw German versions (see Fig. 8).

In order to simulate the actual survey environment in a lecture hall, the second pretest was conducted with 20 Austrian undergraduate students in a computer lab. This way, we were able to estimate the required time for each survey step. During the test, we observed that several test subjects glanced at their neighbors' screens and talked to one another while completing the survey. As this behavior might reduce data quality, we added an appeal to work quietly and by oneself to the instructions. Additionally, we found and fixed a bug concerning the collection of timestamps.
The data collection took place on two days in January 2019 at the University of Innsbruck in Austria and the University of Münster in Germany. All parts of the instrument and the written and spoken instructions were provided in the local language (German). We report the original wording of scale items and selected screenshots in the appendix (Section 8) to facilitate error analysis and possible replication studies. The survey was administered at the beginning of lectures attended by undergraduate computer science students, mainly in the first year. Figure 3 summarizes the steps of the data collection, along with the order of the measured constructs.

During the briefing, we informed the participants that their data would be held confidential and could not be linked to their identity. We also pointed out the voluntary nature of participating in the study and asked them to conscientiously follow the instructions without interacting with one another. We communicated that the scope of the study is the user experience of flight search websites, without mentioning the focus on cookie notices or privacy.

In the second step, participants were given the task to search for a flight with a specific departure, destination, and time. Then, we provided the link to the flight search website described above. When visiting the link, one of the three cookie dialogs depicted in Figure 2 was randomly assigned to each participant. After reacting to the dialog and entering the flight search, a modal window appeared, which asked to wait for further instructions.

When the vast majority had reached this step, a key combination for opening the questionnaire was displayed on the lecture hall's main projector. This way, participants started answering the questions almost simultaneously.
The average participant took 6′ 34″ to complete the questionnaire; completion times extended to 9′ 06″.

In the debriefing, we informed the participants about the topic of the study and showed them screenshots of cookie notices used by real websites. We ran the classroom experiment exactly once at each university, one in Austria and one in Germany, thereby minimizing the likelihood that earlier participants could tell later participants the true purpose of the study.
Fig. 3. Visualization of the study process with treatments (dotted boxes) and measurements (dashed boxes): briefing → visit of the flight search website (consent dialog version T1, T2, or control; choice of purposes) → questionnaire (general perception of the website; recall of chosen purposes; construct items PDI, RE-before; explanation of the participant's effectively consented purposes; construct items RE-after, PDE, PA) → debriefing. Semaphores denote synchronization points: all participants in the classroom proceed to the next step simultaneously.
In fulfillment of approved ethical standards, we clearly communicated that participation is voluntary and anonymous. Respondents could skip questions they did not want to answer. The search task itself and the surrounding stimulus material were chosen so as not to raise emotions or strong feelings. Independent of the participants' selected cookie settings, we did not store cookies and only transmitted data to our servers that is relevant for the research purpose.

By not revealing the true purpose of our study right away, we applied deception ourselves as part of the research method. This is common practice and was in accordance with the ethical oversight bodies at all universities involved. The practice is deemed acceptable in particular because of the low probability of causing harm and the fact that we revealed the purpose of our study in the debriefing, where we also provided contact information and offered to communicate the results.

The experiment caused an opportunity cost of 10 minutes of lost lecture time for everyone in the room, including about 8 students per session (15 altogether) who did not participate in the experiment. To minimize the harm, we chose the beginning of a Q&A session that had not used the entire allocated time in previous years. Moreover, the main reason why students did not participate was that they arrived late in class.
Table 2 reports descriptive statistics of our sample. In total, 164 students took part in the study, of whom 158 completed the survey. We deleted 8 records due to more than four missing responses on critical construct items. The remaining 13 records with missing answers contained a total of 20 missing values, which were replaced by the mean of the observed item scores. Consequently, the analysis uses 150 valid cases. As a consequence of the convenience sample, the ratio of female participants was below 20%, which is typical for German-speaking computer science undergraduates.

Even though we asked participants to use their laptops for completing the survey, we allowed those without one to choose another device they had at hand. While 42.7% followed the survey instructions on a screen width below 500 pixels (i.e., likely smartphones), 52.7% had a screen width above 1000 pixels (i.e., likely laptops). Most participants opened the website in Chrome (54.0%); others used Safari (19.3%) or Firefox (20.0%). We did not observe noteworthy differences in results between device types, browsers, or locations and thus refrain from reporting breakdowns in the following.

Table 2. Descriptive statistics.

Item                              Number   Fraction
All                               150      100.0%
Group
  T1                              50       33.3%
  T2                              48       32.0%
  Control                         52       34.7%
Location of the university
  Austria                         90       60.0%
  Germany                         60       40.0%
Screen width
  < 500 px                        64       42.7%
  > 1000 px                       79       52.7%
Browser
  Chrome                          81       54.0%
  Firefox                         30       20.0%
  Safari                          29       19.3%
  Other                           10       6.7%
Knowledge about cookies
  Self-reported knowledge         121      80.7%
  Correctly described cookies     102      68.0%
Privacy measures (self-report)
  Regularly deletes cookies       64       42.7%
  Has cookies disabled            36       24.0%
  Uses ad-blocker                 113      75.3%
  Uses anti-virus software        80       53.3%

By asking whether participants know what browser cookies are, we find that 68.0% are able to provide a correct explanation. Only 12.7% claim to know what cookies are, but provided either no explanation or an incorrect one. The remaining 19.3% stated that they have no knowledge about cookies.

On average, it took participants 11.8 seconds to respond to the cookie dialog (median: 7.3″). Only 8.7% expanded the dialog by clicking on "Show details", and 3.3% revised their initial choice by unselecting at least one purpose. In total, 41.3% did not consent to any cookie purpose, while 30.7% accepted all purposes offered. Of all participants in the two treatment groups (n = 98), 56.1% clicked on the default button, which results in accepting all purposes regardless of which purposes were actively selected. Of these participants, 34.5% (n = 19) still selected at least one purpose.

To evaluate the construct reliability of PDE, PDI, RE, and PA, we examine internal consistency by calculating Cronbach's α. As shown in Table 3, each construct's Cronbach's α lies above 0.7, indicating that the constructs are sufficiently consistent [59] and thus suitable for further analysis. We also check if the construct scores are sufficiently close to a normal distribution to justify the use of parametric inference statistics.
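The two preprocessing steps described above, mean imputation of sporadic missing item responses and Cronbach's α as an internal-consistency check, can be sketched as follows. The data matrix is a toy example and the helper name is ours, not the authors':

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_participants x k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical item responses with one missing value (np.nan); as in the
# paper, the gap is replaced by the mean of the observed scores on that item.
x = np.array([
    [4.0, 5.0, 4.0],
    [2.0, np.nan, 3.0],
    [5.0, 4.0, 5.0],
    [1.0, 2.0, 2.0],
])
mask = np.isnan(x)
col_means = np.nanmean(x, axis=0)
x[mask] = np.take(col_means, np.where(mask)[1])

print(round(cronbach_alpha(x), 2))  # prints 0.93 for this toy matrix
```

Values of α above 0.7 are conventionally read as acceptable internal consistency, which is the threshold the paper applies via [59].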
Table 4 shows Q-Q plots for all constructs and reports the results of Kolmogorov–Smirnov (KS) tests for normality. Given that the deviations from normality are visibly caused by the range limits only, that no KS test rejects the null hypothesis at the 1% level, and that the way we compute the scores cannot produce any outliers, we deem it safe to report hypothesis tests with parametric t-tests. To err on the side of caution, we report p-values for the two-sided test although all our hypotheses are directed. We begin with the deductive hypothesis tests, before we investigate additional aspects in a quantitative, explorative way (Section 5.2).
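The normality screen described above can be sketched as follows, with synthetic scores standing in for the study's construct scores:

```python
import numpy as np
from scipy import stats

# Hypothetical construct scores on a 1..5 scale (not the study's data).
rng = np.random.default_rng(0)
scores = rng.normal(loc=3.5, scale=0.7, size=150).clip(1, 5)

# One-sample KS test against a normal distribution fitted to the data.
# Strictly speaking, estimating the parameters from the same sample calls
# for the Lilliefors correction; the plain KS test is shown for illustration.
mu, sd = scores.mean(), scores.std(ddof=1)
ks = stats.kstest(scores, "norm", args=(mu, sd))
print(f"KS D = {ks.statistic:.3f}, p = {ks.pvalue:.3f}")
```

A large p-value here fails to reject normality, which together with a visual Q-Q inspection is the justification the paper gives for parametric t-tests.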
Table 3. Construct reliability.

Construct    Cronbach's α   Mean    Median   SD
PDE          .79            3.52    3.67     1.…
PDI          .73            2.82    2.67     1.…
RE-before    .83            2.39    2.33     1.…
RE-after     .74            2.62    2.33     1.…
PA           .80            3.61    3.67     0.…

Table 4. Normality of construct distributions (PDE, PDI, RE-before, RE-after, PA).

[Normal Q-Q plots omitted.] One-sample Kolmogorov–Smirnov tests for normality: p > .01 for every construct.

We test H1 by analyzing whether the deception group and the control group differ in the number of purposes they effectively agreed to. To do so, we assign a score from 0 (no purposes, by clicking on "Confirm selection" without checking any box) to 3 (all purposes, by either checking all boxes and then clicking any button, or by clicking the highlighted default button). Participants who saw the deceptive dialog effectively consented to more purposes. Table 5 presents the score values by treatment and control group. Since the score is a count variable, we use the non-parametric Kruskal–Wallis (KW) test, which indicates a strongly significant effect (χ²(1) = 7.…, p < .01) in the hypothesized direction. This supports H1.
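Since the consent score is a count on 0..3, the paper uses the non-parametric Kruskal–Wallis test. A minimal sketch with made-up score distributions (the group sizes match the study, the distributions do not) could look like this:

```python
import numpy as np
from scipy import stats

# Number of purposes effectively consented to (0..3) per participant.
# Illustrative distributions only; group sizes follow the study (50 vs 52).
t1      = np.array([0] * 16 + [1] * 3 + [2] * 3 + [3] * 28)   # deception group
control = np.array([0] * 26 + [1] * 8 + [2] * 6 + [3] * 12)

h, p = stats.kruskal(t1, control)
print(f"KW H(1) = {h:.2f}, p = {p:.4f}")
```

The Kruskal–Wallis statistic compares mean ranks across groups, so it needs no normality assumption and tolerates the heavy ties that a 0..3 score produces.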
The effect of the deceptive default button on agreeing to no or all purposes is in the order of 20 percentage points, about four times larger than the plain default effect reported for an application consent dialog in [31]. We additionally check if participants in the deception group are more likely to consent to all three, instead of two or fewer, purposes than the control group. A Chi-squared test (χ²(1) = 9.…, p < .01) reveals a highly significant difference.

To test H2a, we compare the measurements of regret before (RE-before) and after (RE-after) the participants were informed about the purposes they effectively
consented to. Results of the paired t-test reveal a significant difference between the before/after states for the deception group (t(49) = 2.…, p < .05, d = 0.…). This supports H2a.
The difference in the control group is not significant (t(51) = 1.…, p > .05, d = 0.…). Thus, we can attribute the regret to the misinformation caused by the deceptive design.

When analyzing perceived deception (PDE) for testing H2b, a notable difference between the groups can be found (Figure 4). The t-test shows that PDE in the deception group is significantly higher than in the control group (t(96.82) = 2.…, p < .05, d = 0.…). Therefore, H2b is also supported.
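The two comparisons above, a between-groups test on construct means and a within-subject test on regret, can be sketched in Python. All numbers below are synthetic and chosen only so the example runs:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic 1..5 construct scores, sized like the groups in the study;
# the means are invented for illustration and carry no empirical meaning.
pde_t1 = rng.normal(4.0, 1.0, 50).clip(1, 5)   # perceived deception, T1
pde_c  = rng.normal(2.9, 1.0, 52).clip(1, 5)   # perceived deception, control

# Welch's t-test: two-sided, unequal variances (hence the fractional
# degrees of freedom reported in the paper).
welch = stats.ttest_ind(pde_t1, pde_c, equal_var=False)

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    """Cohen's d with the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                     / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled

# Paired t-test for regret measured twice on the same participants.
re_before = rng.normal(2.4, 1.0, 50).clip(1, 5)
re_after  = (re_before + rng.normal(0.4, 0.5, 50)).clip(1, 5)
paired = stats.ttest_rel(re_after, re_before)

print(f"Welch: t = {welch.statistic:.2f}, p = {welch.pvalue:.4f}, "
      f"d = {cohens_d(pde_t1, pde_c):.2f}")
print(f"Paired: t = {paired.statistic:.2f}, p = {paired.pvalue:.4f}")
```

The paired test uses each participant as their own control, which is why the before/after regret comparison is more sensitive than a between-groups test would be.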
Drilling down into the findings on H2a and H2b, we analyze whether participants within the deception group who clicked on the deceptive default button perceive even more regret and deception after being informed about the consequence of their response. Indeed, our measurements of RE-after are significantly higher for those who clicked the default button compared to all other participants in T1 (t(43.…), p < .05). However, no significant difference in perceived deception can be found (t(47.…), p > .05). These two results can be explained by the presence of smart participants who debunk the default button as deceptive and do not fall for it. They have less to regret than those who only understand the button's effect after the fact.

To test H3, we investigate the time needed to complete the consent dialogs. The measurement starts when the cookie dialog appears and ends when the participant clicks a button. This measure reflects the effort required for responding to the dialog. As shown in Figure 5, participants in the group with reduced choice spent on average five seconds less on their response than those who were presented with three purposes. The difference in medians shows the same trend, albeit less pronounced due to the skewed distribution. We choose non-parametric statistics to account for this fact. When only comparing the deception and reduced choice groups, the KW test reveals that the difference (χ²(1) = 8.…, p < .01) is highly significant, which supports H3. We also find a significant difference between the reduced choice group and the control group (χ²(1) = 9.…, p < .01). The difference between the deception and control groups, which offer the same number of purposes, is not significant (χ²(1) = 0.…, p > .05). The results indicate that the number of purposes is positively associated with cognitive load, even if the number of options is well below Miller's "magic seven" [60].

Regarding H4, we interpret perceived difficulty (PDI) as a measure of dissatisfaction.
Table 5. Overview of results by treatment group.

Dependent variable                              T1 (n = 50)   T2 (n = 48)   Control (n = 52)   Test
Number of effectively consented purposes        …             –             …                  KW-test ** (H1)
Construct means                                                                                t-test
  PDE                                           …             …             …                  * (H2b)
  PDI                                           …             …             …                  n.s.
  RE-after                                      …             …             …                  n.s.
  PA                                            3.50          3.70          3.62               –
Only subjects who clicked the default button    (n = 27)      (n = 28)
  PDI                                           2.78          2.72                             –
  RE-after                                      3.20          3.48                             –
  PA                                            3.32          3.56                             –
RE-after minus RE-before (all subjects)         …             …             …                  paired t-test
Response time for consent dialog (s), median    …             …             …                  KW-test
Correct recall of effective purposes (%)        …             …             …                  χ²-test **
  Only subjects who clicked the default button  …6%           53.…%                           –

Legend: * p < .05, ** p < .01, n.s. not significant. The test results refer to the bold values in the same row.

Specifically, we test the difference between the two treatment groups in order to show whether the number of purposes in the consent dialog affects the participants' perceptions. Since the t-test results in no significant difference (t(95.99) = 0.…, p > .05, d = 0.…), H4 must be rejected.
Unrelated to our hypotheses, we also tested for differences in PDI between the control group and T1 and T2, respectively. No test result was even close to statistical significance.
Fig. 4. Perceived deception (PDE) by treatment. The t-test shows that PDE is significantly higher in T1 than in the control group (t(96.82) = 2.…, p < .05, Cohen's d = 0.…).

Fig. 5.
Time spent on responding to the consent dialog by treatment. The KW test shows a significant difference between T1 and T2 (χ²(1) = 8.…, p < .01).

At the beginning of the questionnaire, we asked participants to recall which purposes they had agreed to in the consent dialog. Thus, we are able to compare the accuracy of participants' statements between groups. As reported in Table 5, the difference between all three groups is highly significant (χ²(2) = 11.…, p < .01).

When looking at the proportion of participants who declined all purposes, it is notable that 50% of the control group, but only 32% of the deception group, chose this option. After informing participants about their choice, we specifically asked those who agreed to at least one purpose whether they had been aware of the possibility to decline all purposes. Only 32% stated they were aware of this option. However, the proportion of aware participants does not differ significantly between the deception and control groups (χ²(1) = 2.…, p > .05). It seems that even the design of our control dialog, possibly in combination with learned expectations, imposes on a subset of the participants some pressure to select at least one option. This highlights that future research could seek to improve the communication of the "freely given" aspect of GDPR-compliant consent (cf. Section 2.1).

To test whether users' privacy attitudes regarding cookies influence their reaction to the cookie dialog, we also test the relationship between the number of chosen purposes and PA. For this analysis we only consider the groups that were presented all three purposes. We find a weak but significant negative correlation between privacy attitudes and the number of consented purposes (r_s = −.…, p < .05, n = 102). However, the difference in PA between those who clicked the deceptive button and those who did not is not significant (t(91.9), p > .05, n = 98). Moreover, as expected, privacy attitudes do not differ significantly between groups, as participants were randomly assigned to groups. This reassures us that the PA items measure a trait rather than a state.

Next, we reflect on the results, then discuss limitations (Section 6.2), and comment on recent developments in the space (Section 6.3).
Our experimental results confirm the common conjecture that design elements of consent dialogs can nudge users towards making specific choices. We show empirically that the selection of data processing purposes, as required by the GDPR, is not exempt: users accept more data collection purposes when consent dialogs integrate a highlighted default button that selects all purposes at once. Surprisingly, we observe a four times stronger effect for our multi-purpose consent dialog than previously reported for simple default buttons in binary consent dialogs. Moreover, the fact that users who click this button are less likely to correctly recall the consented purposes casts doubt on the morality and legitimacy of this design element, as it might lead users to act against their intention. This interpretation is further supported by the finding that users tend to regret their decision after being informed about the effective purposes.

Besides the effect of deceptive default buttons, we present more encouraging results on the possibility of differentiating between consent decisions for multiple purposes in one dialog: although the number of purposes significantly affects the response time, the difference in perceived difficulty is insignificant. This indicates that most users can handle three different purposes without experiencing the negative effects predicted by the theory of choice proliferation. Of course, more research is needed to investigate the critical number of purposes. Also choice structure, the other relevant determinant in choice proliferation, requires further attention [13].

Our analysis of control variables reveals that users with stronger stated privacy attitudes consent to fewer purposes. While this result seems to challenge the privacy paradox (a term for the often observed discrepancy between stated attitudes and privacy behavior [61–63]), it must be interpreted with caution. First, our instrument is not ideal to study the paradox. It confounds this relationship with the dominant effect of a deceptive default button and measures the privacy attitude only after recalling the effective purposes. Second, unlike in many studies that find the paradox, our items measure privacy attitudes quite narrowly for the specific domain: two out of three items mention cookies. According to the principle of compatibility, behavior is more predictable from attitudes if it is measured on the same level of specificity [64].
Third, the interpretation of attitude–behavior links is problematic if the behavior is partly unintentional, such as accepting undesired purposes. To some extent, this corroborates nuanced or critical perspectives on the privacy paradox [65].
To gauge the relevance of our results, one may ask how prevalent the tested dialog is on the web. Unfortunately, reliable data in this dynamic space is scarce. The most recent data in [9] refer to a snapshot of August 2018 and thus predate the introduction of the dialog on the airline website, where we discovered it, and possibly elsewhere. According to this snapshot, only 7% of web consent notices are blocking, and 8% present multiple purposes
Table 6. Robustness of the main effects: p-values of hypothesis tests broken down by the location of the classroom experiment.

Hypothesis and contrast groups    Austria (n = 90)   Germany (n = 60)
H1: T1 vs Control                 0.028              0.090
H2a: RE-before/-after in T1       0.036              0.098
H2b: T1 vs Control                0.019              0.564
H3: T1 vs T2                      0.031              0.041
H4: T1 vs T2 (rejected)           0.572              0.560

(immediately or on request). These shares almost certainly increased with the adoption of consent managers in the course of 2019 (see Sect. 6.3 below).

However, our choice of stimulus was not driven by the most prevalent design, which we and other researchers [5, 9, 10] suspect to trivially violate the GDPR. Instead, we set out to study elementary design options of multi-purpose dialogs, the novel and most under-researched aspect of consent dialogs. Our dialog implements opt-in and does not proceed without an affirmative action (blocking) in order to anticipate future good practices. The fact that similar dialogs are used by respectable organizations with competent legal departments and millions of unique users per year adds to the relevance. More importantly, since we study individual effects derived from theory, the prevalence of our stimulus material is of subordinate importance. We aim to identify generalizable effects, which could be studied on real or artificial dialogs. The choice of using a real dialog for inspiration, along with a credible cover story, is merely one of multiple measures to assure the external validity of our lab study.

To check for possible risks to external validity, we analyzed the participants' free-text responses for prejudiced assumptions about our study. Only one participant exhibited demand characteristics [66]. The person wrote that he or she agreed to all purposes because the website was part of a scientific experiment. All remaining participants answered as if they were dealing with an actual flight search website.
Moreover, we do not find further indications that the participants might have perceived our stimuli as artificial.

It is important to mention that the study has limitations. First, the experimental setup may not fully reflect users' actual behavior regarding consent dialogs. Even though we made an effort to hide the research

Footnote: Figures extracted from media data of the airline's online services, available from the authors on request.
purpose of our study, we cannot rule out that participants might have guessed our focus on cookie choices or privacy in general. Moreover, our sample is limited to German-speaking computer science students, who are probably more educated about the functionality of cookies and the web in general. This known bias, however, does not compromise the upshot of this paper: if even computer-literate populations fall for the deceptive design, we must assume that the outcome for the general public is even worse. We tried to mitigate all other disadvantages of convenience samples by replicating the experiment at two geographically distant universities. Table 6 confirms that H1–H3 are supported in both populations. We chose a classroom experiment (and accepted its limitations) in order to reduce unknown biases due to participant self-selection, which is an acute problem of empirical privacy research [67, 68]. Precisely the attitudes and beliefs of interest correlate with non-response and dropouts. For perspective, our completion rate is above 88% in all sessions, whereas the concurrent field experiment received 110 completed surveys from more than 30,000 solicitations, translating to a response rate below 0.4% [9]. (The authors acknowledge this bias and chose not to analyze self-reported data quantitatively.)

Footnote: Some p-values for Germany are above 5% only because we conservatively use the two-sided test. The values for the one-sided test are half of the reported ones. Recall that group sizes in Germany alone may be below 20 subjects.

Fig. 6. Dialog of a commercial consent manager (August 2019).

A simple interface adjustment, meanwhile implemented in the airline website, is to change the button text from "Select all and confirm" to "Select and confirm" (and to change the function accordingly) as soon as the first checkbox is selected.
While this breaks with the design principle that button semantics should be stateless, it might avoid the severest mishaps, where cognitive effort that went into selecting purposes is wasted. It requires another user study with more participants to gauge whether this modification reduces the disappointment in the subset of users who select at least one but not all purposes.

In the past months, we (and others [10]) have observed other "innovative" consent dialogs, such as page-long lists of affiliate partners for third-party tracking, that call for tailored user studies through the lenses of deception and choice proliferation. For example, a popular meeting scheduling service uses a modal dialog entitled "We value your privacy" with a prominent button labeled "I accept." To access literally hundreds of options, one has to click on "Show purposes", a text link next to three others (see Fig. 6). Interestingly, this dialog seems to be operated (and presumably evaluated) by an intermediary specialized in consent management. This fits into the picture where ENISA, an EU agency, mentions "consent management" as a new business opportunity for cybersecurity startups [69, p. 10].
Footnote: This description applied to the time of writing in mid-2019. The checkbox logic had been changed once again when we revisited the website in fall 2019 for the preparation of the camera-ready version. This highlights the dynamics in this space.

This study presents new empirical evidence supporting that design elements used in consent dialogs of popular websites might deceive users into agreeing to more data processing purposes than intended. It complements the measurement studies [5, 6, 8, 10] that emphasize the wide adoption of such "dark patterns" [56], as well as a recent field study on cookie banners [9]. Based on these findings, we derive recommendations for user interface designers and policy makers.

Our first recommendation reiterates calls to respect the user's interest: instead of nudging users towards agreements that mainly benefit the party who owns the website, defaults should reflect either a privacy-aware
safe choice or elicit the majority's preferences as a descriptive social norm. This could be achieved by designing a set of best-practice consent dialogs, incorporating the body of knowledge from behavioral privacy research. These templates could be made available to organizations who value consumers' privacy or seek legal certainty without commissioning an intermediary.

However, past and ongoing efforts in the usable privacy research community towards understanding how to nudge users into making safer choices are void if the industry tries to achieve the opposite. Since the value of personal data increases with the number of possible secondary uses [33], businesses have incentives to maximize the number of consented purposes. It is tempting to call for a regulator or oversight body to step in and ensure that dialogs are designed in the users' interest. But we are hesitant about suggesting more (or more specific) regulation, for two reasons. First, the GDPR stipulates freely given, unambiguous, and informed consent. It may take a court decision to provide clarity over the fact that the practices we observe do not meet these requirements and hence cannot provide a legal basis for personal data processing. However, such decisions must be based on further empirical research. Second, the time and cognitive effort millions of users regularly spend on consent dialogs may not justify the outcome at the societal level. Rather than mandating special forms of consent dialogs (which hardly work for devices without a display or for network services that are not customer-facing), a policy priority should be the establishment of a standard for non-interactive privacy preference negotiations. It seems that P3P [25] was 20 years ahead of its time, and the do-not-track header too simple and polarized [70]. There could be a middle ground in which consent dialogs do not disappear.
But their design moves from the hands of data controllers to developers of user agents, who compete for the best service in the data subject's interest. In order to foster competition, and not to repeat the mistakes of do-not-track, browser and app vendors must be required to interoperate with any privacy agent of the user's choice.
Acknowledgements
We thank all participants of the pretests and the main study. We also thank Daniel Woods, Henry Hosseini, and our shepherd Blase Ur for useful discussions, as well as the anonymous reviewers for many constructive comments. This work received funding from the German Bundesministerium für Bildung und Forschung (BMBF) under grant agreement 16KIS0382 (AppPETs) and the Archimedes Privatstiftung, Innsbruck.
References

[1] European Parliament and the Council of the European Union. Regulation (EU) 2016/679 of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (2016)
[2] European Parliament and the Council of the European Union. Directive 2002/58/EC of 12 July 2002 concerning the processing of personal data and the protection of privacy in the electronic communications sector (Directive on privacy and electronic communications) (2002)
[3] European Parliament and the Council of the European Union. Directive 2009/136/EC of 25 November 2009 amending Directive 2002/22/EC on universal service and users' rights relating to electronic communications networks and services, Directive 2002/58/EC concerning the processing of personal data and the protection of privacy in the electronic communications sector, and Regulation (EC) No 2006/2004 on cooperation between national authorities responsible for the enforcement of consumer protection laws (2009)
[4] R. Leenes, E. Kosta. Taming the cookie monster with Dutch law – A tale of regulatory failure. Computer Law & Security Review (2015) 31, 3, 317–335
[5] M. Trevisan, S. Traverso, E. Bassi, M. Mellia. 4 years of EU cookie law: Results and lessons learned. In: Proceedings on Privacy Enhancing Technologies (PoPETs) (De Gruyter Open, 2019) 126–145
[6] R. van Eijk, H. Asghari, P. Winter, A. Narayanan. The impact of user location on cookie notices (inside and outside of the European Union). In: Workshop on Technology and Consumer Protection (ConPro) (2019)
[7] S. Englehardt, A. Narayanan. Online tracking: A 1-million-site measurement and analysis. In: Conference on Computer and Communications Security (CCS) (ACM, 2016) 1388–1401
[8] M. Degeling, C. Utz, C. Lentzsch, H. Hosseini, F. Schaub, T. Holz. Measuring the GDPR's impact on web privacy. In: Network and Distributed System Security Symposium (NDSS) (Internet Society, 2019)
[9] C. Utz, M. Degeling, S. Fahl, F. Schaub, T. Holz. (Un)informed consent: Studying GDPR consent notices in the field. In: Conference on Computer and Communications Security (CCS) (ACM, 2019) 973–990
[10] I. Sánchez-Rola, M. Dell'Amico, P. Kotzias, D. Balzarotti, L. Bilge, P. Vervier, et al. Can I opt out yet?: GDPR and the global illusion of cookie control. In: Asia Conference on Computer and Communications Security (AsiaCCS) (ACM, 2019) 340–351
[11] J. R. Kling, S. Mullainathan, E. Shafir, L. Vermeulen, M. V. Wrobel. Misperception in choosing medicare drug plans. Harvard University working paper (2008)
Fig. 7.
Pop-up with questionnaire.
Fig. 8.
Original German version of the stimulus material. Panels: Deception (T1), Reduced choice (T2), Control.
Fig. 9.
Functional mock-up website offering flight search.
Fig. 10.
German version of the expanded cookie dialog after clicking on “details”.
Table 7.
Constructs and corresponding items (translated from German).

Item | Item text (translated)

Perceived Deception (PDE)

PDE1 The website is dishonest with its users regarding the cookie settings.
PDE2 The website tries to lead users into choosing cookie settings they do not want to choose.
PDE3 The website uses misleading tactics so that users choose cookie settings they do not want to choose.

Perceived Difficulty (PDI)

PDI1 It was confusing to make a choice.
PDI2 It was frustrating to make a choice.
PDI3 ↔ It was easy to make a choice.

Regret (RE)

RE1 I regret the choice I made.
RE2 I would change my choice if I had the opportunity.
RE3 ↔ I am satisfied with my choice.

Privacy Attitudes (PA)

PA1 Protecting my privacy on the Internet is important to me.
PA2 When websites use cookies, this restricts my privacy.
PA3 I am concerned that my privacy is being restricted by websites’ cookies.

Items marked with ‘↔’ are reverse-coded.
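To illustrate how reverse-coded items of this kind are typically handled when aggregating responses into construct scores, here is a hypothetical Python sketch. It assumes a 5-point Likert scale (1 = strongly disagree, 5 = strongly agree) and uses the item codes from Table 7; it is an illustration only, not the study’s analysis code.

```python
# Hypothetical scoring sketch for the Table 7 constructs.
# Assumes a 5-point Likert scale; the scale used in the study may differ.

REVERSED = {"PDI3", "RE3"}  # items marked with '<->' in Table 7

CONSTRUCTS = {
    "PDE": ["PDE1", "PDE2", "PDE3"],  # Perceived Deception
    "PDI": ["PDI1", "PDI2", "PDI3"],  # Perceived Difficulty
    "RE":  ["RE1", "RE2", "RE3"],     # Regret
    "PA":  ["PA1", "PA2", "PA3"],     # Privacy Attitudes
}

def score(responses: dict, scale_max: int = 5) -> dict:
    """Average each construct's items, reverse-coding marked items."""
    def value(item: str) -> int:
        v = responses[item]
        # Reverse-coded: 5 becomes 1, 4 becomes 2, etc.
        return (scale_max + 1 - v) if item in REVERSED else v
    return {c: sum(value(i) for i in items) / len(items)
            for c, items in CONSTRUCTS.items()}
```

A respondent who strongly agrees with all PDE items but finds the choice easy (PDI3 = 5) would, for example, get a high PDE score and a low PDI score after reverse-coding.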