The Value of User-Visible Internet Cryptography
PHILLIP J. BROOKE∗ Teesside University
RICHARD F. PAIGE† University of York

July 5, 2018
Abstract
Cryptographic mechanisms are used in a wide range of applications, including email clients, web browsers, and document and asset management systems, where typical users are not cryptography experts. A number of empirical studies have demonstrated that explicit, user-visible cryptographic mechanisms are not widely used by non-expert users, and as a result arguments have been made that cryptographic mechanisms need to be better hidden or embedded in end-user processes and tools. Other mechanisms, such as HTTPS, have cryptography built in and only become visible to the user when a dialogue appears due to a (potential) problem. This paper surveys deployed and potential technologies in use, examines the social and legal context of broad classes of users, and from there assesses the value and issues for those users.
Keywords:
Security, cryptographic controls, legal aspects, regulation, risk management
1 Introduction

Cryptographic mechanisms are embedded in a range of software applications, including Internet banking and online shopping. These cryptographic mechanisms are, in some cases, entirely hidden from end-users; other mechanisms require the users to interact with them directly, and we call these user-visible applications of cryptography. These mechanisms may involve users entering passwords or passphrases for secret keys; other examples include dialogues related to resolving problematic SSL certificates on web sites. The types of application that we are concerned with include email, web browsing, e-commerce and document management systems; these applications are used widely, particularly by non-IT-expert users. In all cases, the interactions that an end-user has with cryptographic mechanisms and applications take place in a social and legal context.

∗School of Computing, Teesside University, Middlesbrough, TS1 3BA. [email protected]
†Department of Computer Science, University of York, YO10 5GH. [email protected]
First, we outline the legal context for our assessment of cryptography in section 2: this work is initially from an English and Welsh perspective, but acknowledges the cross-border aspects of electronic transactions. We continue by surveying existing related work concerning usability and security in section 3. Section 4 briefly introduces the common, underlying technology that concerns this work. We address emerging approaches in later discussion. Section 5 sets out the scope of our survey. It describes the type of user we are concerned with (a "general Internet user" stereotype) and introduces ten user stories that we use to motivate our later discussions. Section 5.2 identifies three categories of user-visible applications of cryptography. Sections 6–10 examine the application of cryptography, grouping the user stories together thematically. We consolidate and expand those discussions and make some overarching comments in section 11 before concluding in section 12.
2 Legal context

This work is based primarily in an English and Welsh legal context. However, the general observations should be sound in similar jurisdictions, particularly derived legal systems such as those of Canada and Australia. Moreover, the observations related to issues of data protection and transmission (which impact on several scenarios) are also applicable to European jurisdictions subject to EU directives. We structure this part of the discussion into two broad areas: contracts and signatures (integrity matters), and confidentiality and privacy.

2.1 Contracts and signatures
The formation of contracts requires agreement, and that agreement is ideally recorded. A contract can be made verbally as well as by hand-written signature, but the burden of demonstrating a verbal agreement is greater. Contracts may impose confidentiality and similar requirements on one or more parties.

More relevant to computing, e-commerce requires that the making of contracts is mediated by computers. Thus a simple email indicating agreement, or completing an online form by clicking "accept", can be sufficient to form a contract. This brings us to the use of "signatures", where examples include hand-written signatures, stamps or images of a company officer's signature, email signatures that are automatically appended, typed signatures, and so on, all the way to cryptographic digital signatures. Thus we see that "signature" is a rather ambiguous term: Gutmann's tutorial slides include greater detail in this area (Gutmann, 2007). Additionally, Mason (2011) gives a summary of some forms of electronic signature, and comments "the person relying on the signature (such as where you say you did not sign a cheque, and the bank has paid money out of your account on a cheque) must prove it was your signature where you dispute it was not your signature. This is the same for electronic signatures, although the vendors selling digital signatures try to reverse this rule." A thorough coverage of the legal issues surrounding electronic signatures is in Mason's book (Mason, 2012).

Some legislation explicitly addresses the recognition of electronic signatures, such as the Electronic Communications Act 2000 (HMSO, 2000). A result of this is that a wide range of statements can be legally considered an "electronic signature". For example, Monitor, a regulatory body for part of the UK's National Health Service, interprets this to mean that "the following are all examples of an electronic signature:
• Typed name
• E-mail address
• Scanned image of a signature
• Automatic e-mail signature" (Monitor, 2008)
This point applies to other media, such as faxes. Chapter 6 of Mason (2012) provides a detailed analysis of the form of electronic signatures and comments on cases illustrating the variability of legal decisions. Mason also quotes the Law Commission writing on 'Electronic Commerce', including "[. . . ] the validity of a signature depends on its satisfying the function of a signature, not on its being a form of signature recognised by the law" and "Even if a click is less secure than a manuscript signature, reliability is not essential to validity." This illustrates a distinction between the validity of a signature (essentially its acceptability) and the reliability of the method or form of the signature.

The later Electronic Signatures Regulations 2002 (HMSO, 2002) introduce the notion of an "advanced electronic signature [which means an electronic signature]
(a) which is uniquely linked to the signatory,
(b) which is capable of identifying the signatory,
(c) which is created using means that the signatory can maintain under his sole control, and
(d) which is linked to the data to which it relates in such a manner that any subsequent change of the data is detectable"
where an electronic signature itself "means data in electronic form which are attached to or logically associated with other electronic data and which serve as a method of authentication". Additionally, qualified certificates are introduced, which have additional liability provisions.
These Regulations follow the European Directive 1999/93/EC on a Community framework for electronic signatures (European Parliament and Council, 1999; European Commission, 2011), which "addresses three forms of electronic signatures: Basic electronic signature [. . . ] Advanced electronic signature [. . . ] "Qualified electronic signature" [. . . ]"; these forms are criticised in Krawczyk (2010). Mason (2012) comments that the European Commission "may make further efforts to encourage the take-up of digital signatures, in the face of overwhelming evidence that nobody seems to want to use them, unless they are forced to do so." A more general coverage of the evolution of documents and the use of technology, including cryptography, is given by Blanchette (2012), although sometimes from a French perspective.

Information from computer records themselves also has value. The Civil Evidence Act 1995 (HMSO, 1995) specifically places weight on the evidential value of a computer record rather than the admissibility of the record itself. Thus records of businesses and public authorities can be relatively easily used as evidence. A similar provision exists in the US court system in the form of Rule 803 (Federal Evidence Review, 2012).

There is recognition of the need for reliability in the processes surrounding computer systems and evidence. For example, BS 10008 (British Standards Institution, 2008) and the associated BIPs (Shipman, 2008; Shipman and Howes, 2008; Howes, 2008) provide substantial guidance; supplementary material includes a workbook to assist audit.

Note that, other than advanced electronic signatures and some references in BS 10008, nothing above explicitly requires any form of cryptography. Indeed, this, at least so far, poses no problem for the making of contracts in this context.
2.2 Confidentiality and privacy

A major issue for information systems concerns data protection legislation, primarily the Data Protection Act 1998 (HMSO, 1998), an enactment of the 1995 European Union Data Protection Directive. This imposes obligations; for example, principle 7 states "Appropriate technical and organisational measures shall be taken against unauthorised or unlawful processing of personal data and against accidental loss or destruction of, or damage to, personal data." The definition of "appropriate" is, of course, subject to each individual case. Formal notions of "data controller" (a person or persons responsible for the processing of data) and "data processor" (for outsourcing of processing) are given in this Act. Substantial guidance exists, along with a range of standards such as the ISO 27000 series. Besides regulatory requirements, information usually has value to both individuals and businesses, regardless of the presence or absence of personal data. All this needs protecting in the traditional senses of confidentiality, integrity and availability.

Classic examples involve sensitive medical records and bank account and credit card credentials. Within the UK, the Information Commissioner's Office is responsible for enforcement. Some remedies are also available for data subjects (such as demanding the correction of erroneous records). We remark that public reports of legal action following security lapses are unusual, with Sony's recent security problems being an exception (Kuchera, 2011). However, such lapses tend to be either failures of access control or losses of devices or media with plaintext data. We discuss this further in section 11.

Privacy is related to, but not synonymous with, confidentiality. In this survey, we do not need to consider these differences further, with the exception of noting the recent regulations regarding cookies (Information Commissioner's Office, 2012) due to European Directive 2009/136/EC. Compliance with these regulations is interesting due to the contrast with consent (partially discussed above in relation to clicking "accept"): for example, "Implied consent is a valid form of consent". Additionally, there are broader matters of (mis)use of web technologies (including cookies) in malware and surveillance.
3 Related work

Previous work assessing the effectiveness and value of cryptography has examined usability as well as PKI issues. These areas dominate this paper, so we discuss them here. We also introduce further literature where relevant in the sequel.

A classic paper in the usability field is Whitten and Tygar (2005), which concerns the ability of users to use PGP 5.0: "Our 12 test participants were generally educated and experienced at using email, yet only one-third of them were able to use PGP 5.0 to correctly sign and encrypt an email message when given 90 minutes in which to do so". More generally, Furnell and others have investigated the usability of end-user software at length, and find continuing problems with interfaces (Furnell et al., 2006; Furnell, 2007; Ibrahim et al., 2010; Sweikata et al., 2009; Cranor and Garfinkel, 2005; Gutmann and Grigg, 2005). Ho et al. (2010) examined the setup of home wireless networks, and found that "users did not understand the difference between access control lists and encryption, and that devices fail to properly notify users of weak security configuration choices". They proposed a configuration wizard to partially mitigate some of these problems. Zurko and Simon (1996) introduced the term "user-centered security" and discussed the application of usability testing to secure systems. Some attention has also been paid to the education of users in the use of security-related software (Reid et al., 2005).

Others comment on the software itself. Kapadia (2007) remarks "I found that [OpenPGP applications] were unusable with nontechnical correspondents because it required them to install additional software", which relates to some of our remarks on systems such as IronPort and Hushmail in section 7.3. We used a similar approach of server-side cryptography in support of document security (Brooke et al., 2010).

Other work assesses what users understand about security concerns: Gross and Rosson (2007) interviewed twelve users with differing roles to answer "What do users know about security and threats?", "How do users manage their security concerns?" and "Who do users believe is responsible for security, and how do they perceive their role in security?", noting that "entire organizations can be brought down by security failures". Later work suggests that users do differentiate between security (and privacy) concerns and more general computer problems (such as hardware failure) (Gross and Rosson, 2007).

Previous work has also examined PKIs and questioned their effectiveness and usability (Gutmann, 2003; Straub and Baier, 2004). Moreover, the need for PKIs, electronic signatures, etc. is not clear in practice (BILETA, 2011) (see also our earlier comments in section 2.1). Alternatives involve opportunistic encryption (Garfinkel, 2003b), key continuity management (Gutmann, 2004; Garfinkel and Miller, 2005), identity-based encryption (Shamir, 1985; Martin, 2006) and email-based identification and authentication (EBIA) (Garfinkel, 2003a). However, we are not concerned with some other security properties, such as anonymity in systems such as Mixminion (Mathewson and Dingledine, 2004). More broadly, notions of return on security investment (National Institute of Standards and Technology, 2005) attempt to capture the return on investment in security processes, policies and infrastructure, though this focuses on capital investment rather than value delivered to end-users.
An interesting variation is due to Herley (2009), who argues that users' rejection of much conventional security advice (for example, ignoring SSL certificate warnings) is rational. This is on the basis of out-of-date advice and false positive warnings weighed against the cost (to the end-user) of acting on this information. Herley examines password rules, phishing site identification and SSL certificate warnings and comments "the burden [to the end-user] ends up being larger than that caused by the ill it addresses". Similarly, Böhme and Grossklags (2011) argue that human attention is a scarce resource. They too make the point that user inattention can be rational, and produce a simple game model to illustrate typical options for users. A possible way to reduce the demand on attention is the use of social navigation, as suggested by Goecks et al. (2009). They present prototype tools which describe other users' security decisions (e.g., for cookies and firewalls), although this proved less useful for more complex or ambiguous decisions.
4 Underlying technology
Cryptographic technologies typically address confidentiality and integrity issues. The underlying mathematical concepts of these technologies are the same: both employ a range of asymmetric and symmetric algorithms (e.g., RSA and AES respectively). Typical operations include

• key generation, both for long-lived public/private keys as well as transient session keys;
• encryption and decryption (confidentiality);
• signing and verification (integrity); and
• hashing (e.g., as part of signing, or deriving a key from a password or passphrase).

We do not dwell on the mathematical approaches (an appropriate starting point is Schneier (1996)), but instead on how they are encapsulated into the applications and made visible to the user. (A minimal code sketch of these operations appears at the end of this section.) Later, in section 7.2, we see that this encapsulation is not trivial; for example, different software can interpret normalisation of messages in different ways, resulting in spurious verification failures for signatures.

As well as understanding the basic capabilities and scenarios of interest for non-expert end-users (section 5.1), we must also clarify the technical context in which they work. We briefly summarise several major groups of cryptographic software; our end-users will likely use one or more of them, either explicitly or implicitly.

CMS, or Cryptographic Message Syntax, based on PKCS#7, underpins S/MIME for securing email messages.
X.509, itself defined in RFC 5280 (Cooper et al., 2008), provides the most common format for public key infrastructure (PKI) data for the Internet. This usually leads directly to the certificate authority trust/validity model. Certificate revocation lists (CRLs) are also supported in X.509; however, the Online Certificate Status Protocol (OCSP) (Myers et al., 1999) perhaps provides an alternative giving more timely updates.
SSL/TLS
Significantly for our example users, SSL/TLS is widely deployed on websites and mail servers. (Although not identical, we use SSL and TLS as synonyms in this work.) In this role, it is a near-ubiquitous protocol with native support in common web and mail clients. Public keys (as X.509 certificates) are obtained in the initial SSL/TLS negotiation. The relying party then needs to verify that the presented certificate is signed by a trusted root certificate, possibly via intermediaries. We return to this issue in section 11.4, including comments on alternative approaches.

OpenPGP, defined in RFC 4880 (Callas et al., 2007), is an alternative to S/MIME for email messages as well as for general file encryption and signing, based on Zimmermann's PGP. X.509 certificates are not used in OpenPGP; instead, a web of trust is usually employed. The web of trust is not the only option: single and multiple key validation models are supported. Both PGP and GnuPG support this standard and are broadly interoperable.

Other interesting technologies include timestamping services, key servers (for OpenPGP keys), and other means of obtaining up-to-date keys, such as integration into Active Directory and LDAP. The Simple Public Key Infrastructure (SPKI) (Ellison, 1999; Ellison et al., 1999; Ellison, 2004), described in experimental Internet RFCs, concerns a more local naming scheme. We will return to some key management issues later in the discussion. More user-friendly approaches include Hushmail and similar services (discussed in section 7.3).
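To make the operations listed at the start of this section concrete, the following minimal sketch (our illustration, not part of any surveyed system) uses the Python 'cryptography' library. It generates a long-lived key pair, signs and verifies a message, and derives a symmetric key from a passphrase for encryption, much as an email or archive application might; the message and passphrase are, of course, hypothetical.

    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
    from cryptography.fernet import Fernet
    import base64, os

    # Key generation: a long-lived public/private key pair.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    # Signing and verification (integrity).
    message = b"I agree to the contract terms."
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)
    signature = private_key.sign(message, pss, hashes.SHA256())
    public_key.verify(signature, message, pss, hashes.SHA256())
    # verify() raises InvalidSignature if the message was altered.

    # Hashing a passphrase to derive a key, then symmetric encryption
    # and decryption (confidentiality).
    salt = os.urandom(16)
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt,
                     iterations=480_000)
    key = base64.urlsafe_b64encode(kdf.derive(b"a passphrase chosen by the user"))
    token = Fernet(key).encrypt(b"the confidential message body")
    assert Fernet(key).decrypt(token) == b"the confidential message body"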
5 Scope

We will define what we mean by user-visible applications of cryptography in section 5.2, but first describe the type of users we are concerned with.
5.1 Typical users

The typical users of interest are

1. domestic users with tasks such as social email, online shopping and e-banking;
2. office workers, using software such as office productivity applications, undertaking sensitive discussions by email, or working with sensitive data such as personal data;
3. supervisors and managers interacting with other staff, authorising, approving and auditing business processes; and
4. non-IT-specialist users installing or upgrading software, e.g., operating system updates, plugins such as Adobe Flash, and entertainment software.

A scenario-based approach (Carroll et al., 1998; Rosson and Carroll, 2002) allows us to structure the analysis by end-user concerns. From an analysis of the literature and incidental observations of end-users, we identified a set of ten typical scenarios where cryptography plays a role:

1 — Browsing a social website
2 — Buy goods via a website
3 — Online banking
4 — Social email
5 — Sensitive discussion by email
6 — Agree a contract by email
7 — Install or upgrade software
8 — Internal application process
9 — Signing a form
10 — Data confidentiality

Browsing social websites was common to most computer users, with Facebook particularly prevalent. Most users had experience of ordering from the Internet, such as from Amazon, and of using the Internet for banking. Social email is perhaps less common than previously (we speculate that social sites such as Facebook account for this; however, this was not investigated further), although all used email as part of their work. Those in sensitive areas (healthcare, criminal justice) often engaged in discussions of cases by email. Few had agreed formal contracts by email, but negotiations that had an impact on subsequent contracts were commonly mediated by email. Nearly all users had installed software, often games or plugins, as well as downloading applications to mobile devices (e.g., iPhones, Android devices). The larger organisations had formal processes that involved rigid workflows as well as requirements to "sign" forms in some way. The final user story, dealing with confidentiality of data, concerned those users working with "personal data".

The ten scenarios are a representative set to allow us to break down user interactions with cryptography. We do not claim they are complete; there are other specialised cases that we do not attempt to address. Instead, we are concerned with a "general Internet" stereotypical user without specialist skills or needs; we do not address scenarios such as the use of ATM cards or RFID-based and similar access control systems. Later sections group these user stories thematically; subsequently, we consolidate the points in section 11.

Before we can analyse these scenarios, we must say more about our assumptions relating to our users and their environment. As we have suggested already, we do not address relatively small, specialised user groups with very high security demands. These specialised populations can reasonably be expected to undertake appropriate training and be supplied with suitable equipment for their tasks. Instead, we are interested in day-to-day use of computers. A common assumption for all these users is that they have basic computer skills, e.g., word processing and email, but they are not IT specialists and have no need (nor, often, interest) to be IT specialists.

Our analysis required us to make assessments of risk. We followed the common method of identifying the likelihood as low, medium or high, and the impact as low, medium or high.
A typical approach then assesses the overall risk as low, medium or high from the likelihood and impact; a sketch of one such risk matrix follows. In the sequel we discuss the risks identified, starting with the highest.
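As an illustration (ours, not prescribed by any standard), the combination of likelihood and impact ratings can be expressed as a simple lookup matrix; the particular combinations below are one plausible convention.

    # A qualitative risk matrix: overall risk from likelihood and impact.
    # The specific combinations are illustrative, not prescriptive.
    RISK_MATRIX = {
        ("low", "low"): "low",        ("low", "medium"): "low",
        ("low", "high"): "medium",
        ("medium", "low"): "low",     ("medium", "medium"): "medium",
        ("medium", "high"): "high",
        ("high", "low"): "medium",    ("high", "medium"): "high",
        ("high", "high"): "high",
    }

    def overall_risk(likelihood: str, impact: str) -> str:
        return RISK_MATRIX[(likelihood, impact)]

    # e.g., user story 10, a lost laptop holding personal data:
    print(overall_risk("high", "medium"))  # -> "high"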
5.2 User-visible applications of cryptography

There are three categories of user-visible applications of cryptography that we concern ourselves with here.

1. The most obvious user-visible application of cryptography is the direct, elective invocation of a cryptographic tool, e.g., PGP or GnuPG.

2. Indirect but still explicit, elective use of cryptography involves examples such as
• asking an S/MIME email client (e.g., MS Outlook) to encrypt or sign an email;
• encrypting or signing a document in an office application (e.g., MS Office, LibreOffice); or
• selecting encryption in a ZIP archive application (e.g., 7-Zip).
Sometimes this is as simple as ticking a box to select encryption and giving a password which is subsequently used (in some form) as a key to a symmetric algorithm. Other cases, such as signing office documents, require at least a user certificate for an asymmetric algorithm or a full PKI.

3. Much cryptography occurs in the background. Web browsers and email clients can automatically use SSL, discussed further in sections 6 and 7. This is implicit and should be unobservable by the user until there is a problem, such as an out-of-date or otherwise invalid certificate causing the client software to warn the user.

The examples above in categories 1 and 2 usually affect the recipient. A signed document might not require any special interaction, yet the client software may report the state of the signature, possibly raising dialogues or showing warnings. In other cases, the recipient may be completely unaware of the signature (e.g., an office document with an embedded signature, or a multipart signed email) or, conversely, the document may be unreadable without using specialist software (such as ASCII-armoured signed emails).

Encrypted emails and ZIP archives nearly always require a direct interaction to give the relevant key, usually in the form of a password or passphrase. For an email, the relevant private key may already be accessible for automatic decryption, as in some configurations of MS Outlook. Similarly, the first two categories may require explicit key management on the part of the users.

In this survey, we concern ourselves with the examples above where the user becomes aware of the presence of some problem or issue in the underlying structure. Importantly, the user does not necessarily have to relate this to a cryptographic system at all; consider Ho et al. (2010)'s comments on users not understanding the difference between different concepts.

Although we will discuss some issues of endpoint security, mostly in section 11.3, this is in relation to the overall risk for different scenarios. Thus we do not discuss the use of passwords and other authenticators beyond that.

6 Web
We now examine the ten user stories, grouped thematically. We start with three typical web-based scenarios.
User story 1 — Browsing a social website
Alice reads and sometimes posts on a social website, e.g., Facebook or web forums. We suggest that the overall risk here is low: antisocial behaviour and account hijacking are the main risks, but the assets concerned are limited, at least from Alice's perspective. A greater risk might be posed by Alice posting something she later regrets.
User story 2 — Buy goods via a website
Alice wants to buy something from an e-commerce site. She will necessarily use her credit card or a service like PayPal. Either way, at some point, she has to pay money in the expectation that the purchase is delivered as specified. The risks are high here: phishing, website spoofing and non-delivery of goods are the canonical examples, along with theft of payment and other details from the recipient site.
User story 3 — Online banking
Alice views account details and pays bills using her bank's online service. The risks here are as in the previous story: bank details have an obvious value to criminals.

Although relatively obvious, we can find illustrations of these user stories in Anderson's text (2008) in sections 23.3.3, 23.3 and 1.3 respectively, and additionally, for the latter two user stories, in Cronin (1998). Evidence of interest in social networking more broadly can be seen in SOCIALNETS (2009).

Secure web connections via the HTTPS protocol are relevant to these three user stories. In each case, Alice will have to point her web browser to the correct URL: this URL might have been bookmarked from a previous visit, found via a search engine or typed in, perhaps from an advert in a newspaper, or from memory.

Before we consider HTTPS directly, let us address the risks. The main risk is the compromise of login credentials: these credentials are useful to attackers for harassment/nuisance via social media, theft from online banking or misuse of credit card details. Compromised credentials can then be used to call into question the integrity of any transaction involving those credentials, or to present the possibility of compromised credentials for "plausible deniability". Additionally, the re-use of passwords, even on ostensibly low-security websites, clearly permits further exploitation of credentials: "a substantial number of the randomly verified email accounts revealed that 75 percent of the users rely on the same password to access both their social networking and email accounts" (BitDefender, 2010).

At some point, payment details are required. The web browser is redirected to a "secure page" accessed via HTTPS if the entire site is not already HTTPS-based. At this point, we encounter our first problem. The reliance on certificate authorities for X.509 certificates to bootstrap what is essentially a trust relationship has been highlighted previously (Perlman, 1999) and was brought sharply into focus with the Comodo compromise in 2011 (InfoSecurity, 2011) and the more recent issues with DigiNotar (Corbet, 2011). A secondary issue to the Comodo and similar compromises concerns the limited use of CRLs and OCSP by clients to revoke bad certificates. Mozilla Security Blog (2011) reports that the offending certificates were quickly revoked using both the CRL and OCSP mechanisms. We see other examples of this later. Trust in the computers concerned is a deeper problem (Parno et al., 2010).

Common advice given to users for e-commerce transactions typically includes "Check that the padlock sign is shown on your browser and that the URL includes https." Regardless, users still find it difficult to assess whether or not "a connection [to a web site] is secure" (Friedman et al., 2002). Complications include extended validation and more sophisticated phishing attacks (Jackson et al., 2007). Kirlappos et al. (2012) argue that trust seals are ineffective, and conclude that "automatic verification of authenticity" is required. Rapidly changing browser environments are also likely to confuse users; for example, Mozilla Firefox has changed its indication of secure connections several times (Shultze, 2012). What Alice really needs is sufficient evidence that her web client is connected to the correct server and that the connection to that server is encrypted. Observation of some user populations at our institutions (in our cases, academics and students) demonstrates that the security afforded through CAs is brittle at best.
Warning dialogues are often disregarded (Likarish et al., 2008): we have effectively trained our users to ignore the warnings because they have to work around problems. One of the ICT departments at the authors' institutions included instructions to set up a wireless connection which explicitly directed the user to accept an invalid certificate because of the server's setup.

The difficulty in assuring that the client has connected to the correct server is one factor that enables phishing. In one sense, this is an artefact of a global naming scheme (the DNS), and we have seen that SPKI suggests local naming schemes in closed groups. But this poses difficulties for, say, the banking scenario. A moderately naïve solution for online banking would be for a bank to tell users the fingerprint of the correct certificate (a sketch of such a check appears below); but we do not believe that any but the most security-conscious user would actually check this. Essentially, the computer is a tool, and fine management of it is simply not a conscious matter for the user. Hence our focus on user-visible applications of cryptography.

The issue of root certificates aside, the actual usability is relatively good: we do not see people having great difficulties making e-commerce purchases. We return to this in our discussion in section 11.
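The naive fingerprint check mentioned above can be sketched as follows (our illustration; the host name and published fingerprint are hypothetical). It fetches the server's certificate over TLS and compares its SHA-256 fingerprint with one obtained out of band, e.g., printed on a bank statement. Note that the default context still performs the usual CA validation; the fingerprint comparison is an additional check.

    import hashlib, socket, ssl

    HOST = "bank.example.com"        # hypothetical bank server
    EXPECTED_SHA256 = "d2b4...ffa1"  # fingerprint published out of band

    ctx = ssl.create_default_context()
    with socket.create_connection((HOST, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            der_cert = tls.getpeercert(binary_form=True)

    fingerprint = hashlib.sha256(der_cert).hexdigest()
    if fingerprint != EXPECTED_SHA256:
        raise SystemExit("Certificate fingerprint mismatch: do not proceed.")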
7 Email

Our next set of scenarios relates to the use of email, at different levels of sophistication and hence with different requirements for the use of cryptography.

User story 4 — Social email
Alice wants to email her relative or friend, say, Bob. The overall risk is low: the main asset is the email, and it is unlikely to be particularly valuable, although potentially embarrassing. From Bob's perspective, someone pretending to be Alice is a very low risk.
User story 5 — Sensitive discussion by email
Suppose Alice and Bob work together and need to discuss a serious problem with a particular task. Email is one possible medium. The risks revolve around confidentiality.
User story 6 — Agree a contract by email
Alice agrees by email to undertake some work for a small business. The main risk here concerns non-repudiation by the business or vice versa. Thus it is not so much an issue of making the contract but one of evidencing that the contract has been properly made, i.e., that the elements of consideration, intention, offer and acceptance are all present.

An example of sensitive email is given in Gaw et al. (2006). Movement of email services into the "cloud" is advancing, with outsourcing to Google and Hotmail in evidence, along with suggestions for the US Federal Government (Cloud Computing Security Working Group, 2011).

Email is, for many users, an effective communication medium, although the prevalence of both spam, which we do not directly address, and large volumes of legitimate email can degrade this. The essential risks here are twofold: one is the loss of confidentiality; the second concerns spoofing or modification (integrity) and non-repudiation.

This is a good example for opportunistic encryption (Garfinkel, 2003b). A mail user agent or mail submission agent connecting to a server may use SSL/TLS to encrypt the conversation with the server (a sketch appears below). This has the same problems as for web servers, i.e., how does the user know that they have connected to the correct server? But differently from the HTTPS example, mail servers are arguably harder to spoof. Two major classes of mail server are those within a particular business and those of the user's ISP. In both cases, we should have a good level of confidence that the relevant part of the DNS is correct, at least from the client's perspective, and that regardless of the certificate, we have connected to the correct server. Some users may be in a closed or partially restricted environment (e.g., healthcare), further reducing the incidence of problems. However, this observation leads us to a further point: within a particular business, how many users are likely to be actively sniffing the network?

We develop this point further. Older, hub or broadcast-type networks are very easy to monitor for other users' traffic. Newer switched wired networks are harder to monitor, although some switches are believed to degrade to operate as hubs. Wireless connections are an instance of broadcast networks, which are potentially easier to monitor unless encrypted with, say, WPA2. Since it is relatively cheap and easy to arrange for a mail server to offer SSL/TLS connections, it is proportionate to do so and thus not worry about any possible sniffing by insiders or those with access to the network.

Mobile users provide a complication. The argument above does not apply to a user temporarily visiting another organisation or using a hotspot, as they cannot rely on the infrastructure to the same degree (for example, there is more delegation of DNS). This is no worse than the general HTTPS case.

Thus most users can assume that their ISP or business mail server does receive their email, and opportunistic encryption using SSL/TLS defeats any local sniffing. However, if the ISP or local mail server is not trusted, the user may be reluctant to trust this encryption of the connection. Further, this is only transport encryption, not storage encryption: the email must be stored, even if only transiently, on each server that handles it. Certainly in the case of a business mail server, there is a significant broader problem: if the users cannot trust their own servers, then what else is wrong with the infrastructure?
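A sketch of this opportunistic transport encryption when submitting mail (our illustration; the server name is hypothetical): the client upgrades the connection with STARTTLS when the server offers it, defeating local sniffing, while the message itself is still stored in plaintext on each server that handles it.

    import smtplib, ssl

    with smtplib.SMTP("mail.example.org", 587) as smtp:
        smtp.ehlo()
        if smtp.has_extn("starttls"):
            # Upgrade the session; the default context verifies the
            # server certificate against the local root store.
            smtp.starttls(context=ssl.create_default_context())
            smtp.ehlo()  # capabilities may differ after the upgrade
        # ...authenticate and send as usual. Without the upgrade, the
        # whole conversation (including any password) travels in clear.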
This leads us to consider S/MIME and OpenPGP for emails. Capable users might choose to generate key pairs and use one of these cryptosystems to ensure confidentiality of their messages. However, such users are, by observation, a tiny minority of the population as a whole. One barrier to adoption of this approach for secrecy is the need for the recipient to have a public key; this has resulted in multiple attempts to create public keys on demand, e.g., identity-based encryption. We discuss these and similar approaches, such as Hushmail, in section 7.3.

Moreover, within a particular organisation —with an assumption of a trusted infrastructure— the emails are already safe due to opportunistic encryption, other than at the endpoints. These endpoints are the sender's and receiver's computers. Here, we can remark that some users' security hygiene is negligible, e.g., our later remarks about screenlocks. It is, of course, notable that users' desktop machines are a major entry point for malware, via the web or USB sticks. For example, McQueen (2010, slides 108–109) reported that 20% of users inserted a thumb drive found in a public place into their computer. In our discussion (section 11) we comment further on endpoint security.

Returning to the point of opportunistic encryption, we note that discovery of the correct settings can be challenging. We speculate that increased outsourcing of email services in large organisations may be to blame. Some software, such as Apple's Mail, seems remarkably robust; Mozilla's Thunderbird needed much help to connect to the student mail system at one of the authors' institutions.

Additional aggravations concern the use of passwords and passphrases for securing cryptographic keys. For example, some systems do not require a password after importing a PKCS#12 file: the private key is accessible on demand. Thus someone with access to that desktop machine can read any email, even if it is encrypted to that particular key.

Further, key management remains a major problem. Gutmann (2003) reports that obtaining a key from a public CA "takes a skilled technical user between 30 minutes and 4 hours work". Little has changed since then, and in any case, these certificates are "low value". Local CAs using the SPKI model can more easily issue certificates for their own servers, and can ensure that centrally provisioned machines have the relevant root certificate installed. But external users do not benefit from this.

The problem goes one step further. We have seen examples of users in the public sector sending emails with S/MIME signatures. "Good", one might think. However, the certificate issuer is one of these local CAs: we can decide to accept the issuing certificate in our mail client. But some software, such as gpgsm, takes the decision that certificate revocation lists must be checked: this is correct in our view. At this point, we discover that the machine which serves the CRL is not accessible outside of that organisation. The value of the CRL, and thus of the certificate overall, is massively reduced. Moreover, the particular characteristics of the organisation in our example make it very unlikely that unauthorised users would have access to even that organisation's buildings, let alone the computers within them.

Even if a user perseveres and obtains a key for use with their email client, configuration and setup often remain challenging.
Dialogues remain unintuitive for the most part. In the course of other work, we counted 8–9 steps to import an S/MIME certificate from a PKCS#12 file, depending on the email client (a sketch of such an import appears below). So we assert that there is no real value in signed email except in the case where users have both good reason to fear spoofing or modification of their messages, and have had the opportunity to confirm, ideally face-to-face, that the cryptographic certificates are correct.
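For comparison with those dialogues, the import itself is programmatically a one-step operation. The sketch below (our illustration; the file name and password are hypothetical) loads a PKCS#12 file with the Python 'cryptography' library; once loaded, the private key is held unprotected in memory, mirroring the observation that some clients never prompt for the password again.

    from cryptography.hazmat.primitives.serialization import pkcs12

    with open("alice.p12", "rb") as f:
        key, cert, extra_certs = pkcs12.load_key_and_certificates(
            f.read(), b"import-password")

    print(cert.subject)  # the identity the certificate asserts
    # 'key' can now decrypt or sign on demand, with no further prompt:
    # anyone with access to this session can read mail encrypted to it.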
Even if we addressed the issues above, interoperability is poor in contrast to general use of web browsers with HTTPS. We examined a range of email clients, using versions current in early 2011, as listed in Table 1. Although S/MIME is generally well supported natively, OpenPGP often requires plugins, and these are not available for some MUAs, notably MS Outlook. This means that communities of users need to agree on the cryptosystem to be used; yet these communities are often not well-defined and have porous boundaries.

                          S/MIME             OpenPGP
    MS Outlook            native             —
    Mozilla Thunderbird   native             Enigmail plugin
    Apple Mail            native             —
    Alpine                native & filters   various filters

Table 1: Email clients examined

We sent and received emails using either S/MIME or OpenPGP. For OpenPGP, we examined both "inline" and MIME/OpenPGP messages. Encrypted messages were uncomplicated and mostly worked. On some occasions, they were simply not recognised and were ignored by the client: the common feature in these cases was that the multipart/encrypted message was not the top-level MIME part. However, such messages are entirely valid in terms of MIME and, arguably, could occur in practice when digests are sent.

Verification of clearsigned messages was much more brittle. Again, some clients required that the multipart/signed message was the top level, or it would be ignored and not displayed. Others had trouble verifying messages they had sent themselves! Clearsigning is strongly preferred over opaque messages, as clearsigned messages are readable by users who do not have software capable of verifying the signature.

When we find that some mail servers also rewrite MIME messages, causing clearsigned messages to fail to verify, we conclude that the technology remains too brittle and interoperability is relatively weak. This is disappointing after so many years. The problems are well known, including suitable treatment of whitespace, line-endings and character sets (indeed, we had to address the same canonicalisation process when working with XMdoc (Brooke et al., 2010)). That email systems remain so brittle in respect of clearsigned messages militates against their use, as false negative verifications degrade the usefulness of signing even further. They lead to the same issue that we encounter with web server certificates, where users are trained to ignore the warning messages, if they actually check the signature at all. Indeed, we speculate that it would take other users a long time to notice if we sent signed emails with a revoked key.
Sending an encrypted email requires that the recipient has a key to decrypt it. Both symmetric and asymmetric cryptosystems have well-understood problems here. Identity-based cryptosystems are rooted in Shamir (1985)'s work; other work includes Martin (2006). Typically, the key generating centre is a trusted third party, and can compromise the system. This is not necessarily a problem, given that some trust is required at some point. Boneh and Franklin (2003) provide an example of an identity-based encryption system, and give several useful properties such as restriction to dates and security classifications, easy revocation and delegation of decryption keys. Cocks (2001) describes a scheme based on quadratic residues, and comments that multiple authorities "will be desirable". This point is addressed by Lee et al. (2004), Gentry (2003) and similar work, although the fine details do not concern us at this point. In general, we need to trust some infrastructure, and simpler schemes have obvious single points of failure and escrow.

A related approach is to make this as transparent to the end-user as possible, particularly in terms of software requirements, which we relate to the earlier quote in section 3 from Kapadia (2007). We use IronPort (Cisco, 2011) and Hushmail (Hushmail, 2011) as exemplars here. Both can use a Java applet so that decryption occurs on the client machine. Additionally, both offer an option for processing messages on the server machine via a secure web session. In this latter configuration, these services are not significantly stronger than HTTPS as described above: this is recognised in such services (Hushmail, 2010, 2011; Singel, 2007). Some implementations send the email directly and only the decryption key is escrowed, which has some positive impact.

A positive side effect is that policy engines such as IronPort can be used to reduce the "fat-fingering" of emails by requiring that all out-of-organisation emails are subject to policy enforcement (e.g., encryption, or simply disallowing some outbound traffic).

Thus our point remains: for a typical user, what threats does this mitigate? The endpoints remain a problem: for example, some local users of health service data receive messages via a secure email service of a similar design to that discussed above, but the data is stored locally, as plaintext. If we combine local plaintext storage with a transparent approach and implicit trust in the service provider, there seems to be little security advantage over opportunistic encryption of email or a "secure web dropbox".

Key continuity management (KCM) (Gutmann, 2004), based on imprinting (Stajano and Anderson, 2000) or trust-on-first-use (as in SSH), is a further option: we implicitly trust the first contact and only warn if credentials change unexpectedly (a sketch follows). Garfinkel and Miller (2005) experimented with S/MIME, Outlook Express and KCM, and concluded that "KCM is more secure than today's alternative to KCM: no cryptographic protection at all" but also that "it is not the panacea to the mail security problem for which we are looking". Related attempts include STEED (Koch and Brinkmann, 2011), which argues for end-to-end encryption and (similar to earlier points) trust-on-first-use. STEED also includes further attempts to make key management easier: automatic key generation and key distribution via DNS.
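The essence of KCM/trust-on-first-use is small enough to sketch (our illustration; the storage location is hypothetical): remember the fingerprint seen on first contact, stay silent while it remains stable, and warn only when it changes.

    import json, os

    STORE = os.path.expanduser("~/.known_senders.json")  # hypothetical

    def check_continuity(sender: str, fingerprint: str) -> str:
        known = {}
        if os.path.exists(STORE):
            with open(STORE) as f:
                known = json.load(f)
        if sender not in known:
            known[sender] = fingerprint       # imprint on first contact
            with open(STORE, "w") as f:
                json.dump(known, f)
            return "first contact: key imprinted, no warning raised"
        if known[sender] == fingerprint:
            return "key unchanged: message accepted silently"
        return "WARNING: key differs from first contact"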
8 Software installation

User story 7 — Install or upgrade software
Alice installs some software from the Internet. How can she be sure that it is free of malware and from the correct publisher?

The primary objective here is to ensure the integrity of the system as a whole. Once installed, operating systems typically receive updates over their lifespan, for example Microsoft (2007) and Debian (2010). Application software is initially installed and subsequently updated. In all these cases, the intent is to ensure that the "correct" software is installed or updated, in the sense that it should be "approved" or at least "certified" by someone responsible. The simplest case is that the original publisher or developer has made the updates available, but there is an obvious competitive argument in favour of third parties making plugins, updates, etc., available. Typical examples include drivers and updates on Microsoft Windows and package signing by the Linux distributors, e.g., Debian's checking of signatures via apt (Joey Hess and others, 2006).

We make much use of "scare" quotes in the previous paragraph: the exact purpose or value of the software can vary between stakeholders. For example, some vendors may wish to restrict the platform so that only software they approve is installed (perhaps for control of a "marketplace"), or to limit potentially bad interactions of packages.

The risks are obvious: malware can masquerade as "genuine" software, and thus we make the reasonable leap to cryptographically signing software (a sketch of a signature check appears after this list). We observe that some security incidents in our own institutions are due to attempts to install software of relatively dubious origin. From observation of users, we see two well-known issues:

• As with the web and email examples, users disregard warnings because they obstruct the user's intention: to install some software. Note that we do not concern ourselves with policy issues. For example, some system administrators may wish to ensure that only particular patches are installed; involuntary upgrades may break other software. Additionally, some patches are large, and may inconveniently use disproportionate amounts of bandwidth for roaming users.

• Software signing uses public key cryptography: thus some public keys have to be trusted. We have the usual root trust problem as described in section 6. Indeed, this scenario can be viewed as a subset of the connecting-to-a-web-server scenarios.
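The check underlying such signing schemes can be sketched with GnuPG (our illustration; the file names are hypothetical, and the publisher's key must already be present in the local keyring, which is precisely the root-trust problem just noted).

    import subprocess

    # Verify a downloaded archive against its detached OpenPGP signature
    # before installing anything from it.
    result = subprocess.run(
        ["gpg", "--verify", "installer.tar.gz.sig", "installer.tar.gz"],
        capture_output=True, text=True)

    if result.returncode != 0:
        raise SystemExit("Signature check failed; refusing to install:\n"
                         + result.stderr)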
9 Signing forms and documents

This set of user stories is somewhat different, and relates to applications in use in specific domains and industries — particularly those with requirements for signing electronic documents.
User story 8 — Internal application process
An applicant, with the assistance of his supervisors, completes part of an application form. Two different department heads need to sign off various resources and indicate their support, as well as obtaining certification from a finance department clerk. This documentation is then forwarded for a final decision to be made. The risks are a little more subtle than in some of our earlier user stories. If everyone is cooperating and trustworthy, there is no problem. However, some people do attempt to defeat the checks and balances in such schemes: we discuss this further below.
User story 9 — Signing a form
A variation on the agreement of a contract: a publisher requests that Alice signs an agreement, e.g., our motivating example here is a transfer-of-copyright form. As in the earlier example, this is a low-risk example: the problem is to be able to provide evidence if the agreement were subsequently challenged.

We have previously examined the issue of distributed, non-centralised forms with requirements such as integrity and auditability in Brooke et al. (2010). One of the motivating scenarios there was our current
Internal application process (8) user story. A very specialised form of signing (certification, in this instance) covers court documents, as described in Reiniger and Francoeur (2010). Some services provide a web-centric approach, such as Adobe's EchoSign service (Adobe, 2013).

However, we now look at the broader process in the event of a subsequent problem, and compare with the "sign form" scenario. A non-computer approach for the latter scenario is for the publisher to post the form to Alice, who signs it and posts it back. A more common method is to email a document, e.g., PDF or MS Word, and ask for a signed copy to be scanned and then emailed or faxed. A final option (which the authors have seen several times lately) is for the publisher to offer an option of signing the PDF file using an X.509 certificate. Again, this causes a dependence on certificate authorities, as discussed earlier.

In common with the "agree a contract by email" user story, we note that an email itself, even without any cryptographic measures, is likely to be sufficient as evidence. Similarly, due to legislation such as the Civil Evidence Act 1995 (HMSO, 1995), the document signing scenario is relatively easy: we would assert that the computer system is functioning correctly and the existence of the records would be sufficient. One party would have to actively dispute the validity of the assertion. Of course, cross-border issues complicate this, but all agreements we have seen include choice-of-law clauses, thus mitigating this issue.

Interestingly, we can raise difficulties with demonstrating that a signatory has seen and understood the terms of an agreement. Click-through agreements are believed to be enforceable (Mason, 2011), although particular clauses may not be.
Complications concern the splicing of documents, whether intended to subvert organisational controls or simply to expedite a process.

We assert that people are essentially trusting. Consider again the
Internal application process (8) user story above; a more specific instantiation is the approval of a course of training within an organisation (based directly on a real system). The process itself is relatively involved, but a major problem in terms of audit and good governance concerns the signing of these forms. In a purely paper process, a single document should be signed by all the parties (applicant, supervisors, department heads and finance clerk). But the difficulty of obtaining all these signatures with an increasingly mobile workforce often results in multiple signature pages being submitted for a given document. Worse, there is no guarantee that the signatures are attached to the correct version of the document: multipage documents can easily be spliced together.

The next step is to consider how these documents are handled when emails become involved. The committee that makes the final decision on these documents now routinely sees a word-processed form, with some signatures on printed pages and some printouts of emails from various principals asserting their support of the application.

In both the purely paper process and the process involving emails, there is trust that no one is actually trying to defeat the system by presenting an application with putative signatures that are in some sense false. The emails are being sent within the same organisation, using the same central mail service, and thus those relying on the veracity of these emails and hand-written signatures trust the system as a whole. Essentially, we assert that there is no demand in typical domestic or business processes for cryptographic assurance of emails.

Interestingly, this appears to be backed by the experiences with qualified certificates (e.g., Krawczyk (2010), referred to in section 2): there is simply no real market for them outside of very specialised demands. This is likely to continue while there is no statutory requirement to use an advanced electronic signature or qualified certificate, since the existing legal framework accepts the name on the email as a sufficient replacement for a hand-written signature. So a simple email is sufficient for authorisation and, implicitly, also for audit purposes, but is not what many in the information security sector would view as sufficient for integrity.
10 Disk and file encryption
User story 10 — Data confidentiality
Alice has some data on a laptop computer that is the subject of data protection obligations. Laptops can be lost or stolen relatively easily, thus the risk is medium or high.

Disk and file encryption is purely about confidentiality, with some large examples described by Lane (2009). We suggest that this is the simplest of our selection of problems. Essentially, media can be lost: making it hard for the records on that media to be (ab)used is an obligation in most data processing scenarios. Examples are easy to find in the media; we are aware of local cases, e.g., involving sensitive medical records. The Information Commissioner's Office has a range of press releases (Information Commissioner's Office, 2011) detailing some of these incidents. In terms of risk assessment, we suggest that this is one of the most significant risks facing most organisations. Whereas we argue in our earlier stories that the integrity of information is relatively rarely challenged, there is a high likelihood of accidental loss of storage media and computing equipment; similarly, the type of information lost can range from trivial to highly compromising.

We initially suggest that this should be relatively easy to manage. A range of software is available, including paid-for and free applications. Small installations can rely on simple use of passwords (a sketch follows at the end of this section), while larger organisations may use some form of enterprise management capability such as Symantec's PGP Whole Disk Encryption.

As usual, we observe that the practicalities are not so easy. Discussions with local SMEs during short (one-day) basic IT security courses demonstrate that some simply do not recognise the need to protect data from inappropriate disclosure, although the need for antivirus software is commonly recognised. Those that do sometimes suffer from choice paralysis: how do non-experts choose a suitable piece of software? Built-in options are little better: "scary" but otherwise correct dialogues about encryption passwords being critical deter users. The well-known costs associated with managing additional software, handling keys, issues with backup and recovery, etc., become relevant. As a final remark, we note that a small but significant minority of our undergraduates found TrueCrypt's dialogues confusing: these students managed to overwrite existing files when they were trying to create new file containers.

However, given the risks for most users, we argue that any reasonable disk encryption is effective, as the aim is to prevent compromise due to accidental loss and casual thieves; it is not effective at dealing with determined attackers.
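A minimal sketch of passphrase-based file encryption of the kind this user story calls for (our illustration, using the Python 'cryptography' library; the file name and passphrase are hypothetical). A key derived from the passphrase encrypts the file at rest; as the "scary" dialogues warn, losing the passphrase loses the data.

    import base64, os
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives.kdf.scrypt import Scrypt

    passphrase = b"a long passphrase chosen by Alice"

    # Derive the key from the passphrase; the salt is stored with the file.
    salt = os.urandom(16)
    key = base64.urlsafe_b64encode(
        Scrypt(salt=salt, length=32, n=2**15, r=8, p=1).derive(passphrase))

    plaintext = open("patient-records.csv", "rb").read()
    with open("patient-records.enc", "wb") as f:
        f.write(salt + Fernet(key).encrypt(plaintext))
    os.remove("patient-records.csv")  # the plaintext copy must not linger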
11 Discussion
We have examined a range of common applications. We now examine four overarching themes in relation to user-visible applications of cryptography:

• risk and value;
• deployment problems;
• endpoint security; and
• trust problems.

11.1 Risk and value

We have identified a range of risks in our user stories above. We can place them into three groups:

• risks best mitigated by user-visible applications of cryptography;
• low risks; and
• risks that are mitigated by legal, societal or other technological measures.

We now take these in turn. The
Data confidentiality (10) user story is an outlier compared to our other user stories. It demonstrates an effective mitigation of the risk using user-visible applications of cryptography, although problems such as the (mis)management of encryption keys can occur. Essentially, it can convert the accidental and inevitable loss of readable data on portable media into the loss of encrypted data. Thus, in this case, the use of cryptography is valuable compared to the risk. Even then, it can be automated further by inclusion in the boot process. A diligent attempt to use encryption can form part of the management of the legal risk from, say, the UK's Data Protection Act 1998.
The risks in some of these user stories are low because the impact of a breach is low, for example in social websites and social email. The lower this risk, the less justification there is for the costs (time, effort and money) of user-visible applications of cryptography as distinct from technologies such as opportunistic encryption. Users perceiving a low impact, whether consciously or unconsciously, are unlikely to attempt to mitigate that risk.
The remaining medium or high risks can be mitigated by other means. Although we have concentrated on UK law (albeit primarily English and Welsh, and related systems such as those of Canada and Australia), the legislative situation is similar in other jurisdictions. For example, for data protection issues, European countries have their own implementations of the 1995 European Union Data Protection Directive. Procedural and audit safeguards are often in place, particularly relevant for environments where a relatively large number of users may legitimately access data (such as in the health and law enforcement sectors).

Notably, cryptographic signatures typically have no added legal value over other types of signatures (as described in section 2). This applies particularly to the
Buy goods via a website (2), Online banking (3) and Agree a contract by email (6) user stories, and to a lesser degree, the two scenarios discussed in section 9. The mitigation in all these cases is that the parties have recourse to the legal systems, where courts would be asked to decide if a contract existed. A simple email without a cryptographic signature may be sufficient for a court. Chapter 8 of Mason (2012) discusses issues of liability further.

For financial transactions, reactive monitoring systems, as exemplified by credit card companies, identify anomalous patterns of use which trigger out-of-band authorisation by the credit card holder (sketched below). This monitoring, along with legal guarantees limiting the risk to the card holder, can substantially reduce the risk, at least to the card holder; the merchant may take on greater risk, along with the issuing bank. However, we commented earlier on the variability of such legal protection. The advertised guarantees to account holders and the relatively low likelihood of any particular individual becoming a victim, versus the obvious convenience of online banking, can reasonably account for the popularity of online banking. The issues with CAs and SSL simply do not pose a sufficient problem for these users to decline to use online banking and similar services.
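As a stylised illustration of such monitoring (not any issuer's actual system), the following Python sketch flags transactions that deviate from a card's recent history and routes them to an out-of-band confirmation step; the rules, the threshold and the helper names are invented for illustration.

    # Stylised sketch of reactive transaction monitoring. The rules and the
    # 3x threshold are invented for illustration only.
    from dataclasses import dataclass

    @dataclass
    class Txn:
        amount: float
        country: str

    def is_anomalous(txn: Txn, history: list[Txn]) -> bool:
        # Flag unfamiliar countries, or amounts well above the card's pattern.
        usual_countries = {t.country for t in history}
        typical_max = max((t.amount for t in history), default=0.0)
        return txn.country not in usual_countries or txn.amount > 3 * typical_max

    def authorise(txn: Txn, history: list[Txn]) -> bool:
        if is_anomalous(txn, history):
            # Out-of-band step: e.g. a one-time code sent to the card holder.
            return confirm_with_holder(txn)
        return True

    def confirm_with_holder(txn: Txn) -> bool:
        # Placeholder for the issuer's out-of-band channel (hypothetical).
        return False   # decline unless the holder explicitly confirms

The point of the design is that the cryptography of the transaction itself is untouched: the mitigation operates after the fact, on patterns of behaviour.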
There are limitations to our evaluation of security risks. The argument advanced so far is qualitative. A finer-grained analysis requires quantitative data and suitable objective metrics, as suggested by Stolfo et al. (2011). In their work, they describe at least three kinds of adversaries:

• nation state actor,
• expert operator adversary, and
• insider expert developer.

However, evidence from reported incidents suggests that casual, opportunistic and accidental risks, such as phishing and inadvertently losing storage media, should be of greater concern to most end-users of the type we are concerned with in this work. Moreover, in the absence of strong compartmentalisation, the technologies discussed in this paper are unlikely to deter or restrain any of Stolfo et al.'s adversaries.

Moreover, the evaluation of value or impact is notoriously dependent on the viewpoint of individual stakeholders. For example, Schneier refers to externalities (Schneier, 2007) and Ackerman et al. (1999) discuss the varying value of different types of information according to individual preferences.

The previous section has illustrated the value, or lack thereof, of user-visible applications of cryptography for mitigating the risks in our user stories. We now examine how these mitigations are sometimes undermined in practice. This undermining occurs in two ways:

• through lack of individual and organisational awareness of the need for cryptography; and
• through (mis)use of that cryptography.

The former point applies to both the selection and implementation of suitable systems by organisations, as well as actual use by individual end-users. For example, organisations may not recognise the need for encryption of sensitive data. Even if an organisation does recognise this need, individual end-users may not recognise it, or may disregard it for other reasons. Thus organisations promulgate their need via policies and procedures. Other authors have also commented on awareness, as well as technological and regulatory issues. For example, Srivastava (2009) considers the Australian environment and remarks that "there is significant evidence of Australian businesses' lack of awareness and understanding of electronic signatures and the associated legislation, despite a regulatory framework to facilitate their use".

The correlation between risk and the awareness of need for cryptography is unclear. This is illustrated in our user stories. For example, in the lowest-risk user stories such as social email, we would not expect any awareness of need. At the other end of the scale, such as online banking, awareness should be high, in part due to media coverage. Interaction with users suggests this is the case. However, other user stories are problematic. The risks in handling removable media are often not recognised, as evidenced in the UK by reports from the Information Commissioner's Office (Information Commissioner's Office, 2011). Additionally, inadvisable software installation is implicated in malware infections on end-user computers. There is little surprising in terms of awareness here, and it remains an open question how end-users can be made aware. Automation (discussed shortly) remains the most obvious option.

Mitigation of risk using cryptography is also undermined by intentional or accidental misuse. Of particular interest is the usability of these tools. We have previously asserted that users of interest to us have basic computer skills, e.g., word processing and email.
Requiring them to directly operate cryptographic software poses a substantial problem when considering usability (Whitten and Tygar, 2005). The problems continue, and anecdotal evidence is in good supply. For example, Roger Grimes says:

"Case in point: I routinely use Pretty Good Privacy (PGP) and S/MIME to secure e-mails and file transfers. Yet frequently, even somewhat knowledgeable IT security people get confused about which keys to use when. In order for someone to send me encrypted content, I need to send that person my public key. Similarly, I need the recipient's public key so that I can send him or her encrypted content. We should never share private keys. That's why they are called private. Pretty simple — or so you would think. More often than not, if the person isn't overly familiar with PGP/SMIME, even if they've been using it, they send me their private key.

"Being the good citizen that I am, I delete their private key and ask again for their public key, explaining that with their private key, I could be them, for all digital purposes. About half the newly educated group then sends my public key back or, if they're using PGP, their private key ring, which contains all their private keys. You might think that I'm making this stuff up, but it's pretty much been this way with PKI and PGP exchanges since they were invented. PGP's own Phil Zimmermann has often written on this subject." (Grimes, 2009)

Simplicity of use is obviously beneficial. We might reasonably consider that user-visible applications of cryptography have an inherent requirement for user effort, and that deployment involves consideration of training requirements. However, our end-users are trying to achieve relatively simple tasks; the computers are a means to an end, and our user might view the computer as nothing more than a tool like a washing machine. Thus our argument is that in the scenarios we consider, asking for any significant user effort to understand and correctly use these cryptographic features is unreasonable and likely to result in non-conformance and inadvertent misuse.
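To make the public/private distinction in Grimes's anecdote concrete, the following sketch generates a keypair and serialises only the public half for sharing, keeping the private half local under a passphrase. It uses the Python cryptography package rather than PGP itself, so the details are illustrative rather than PGP-specific.

    # Hedged sketch of the distinction in Grimes's anecdote: only the public
    # half of a keypair is ever sent to correspondents.
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)

    # This PEM block is the part that should be sent to correspondents.
    public_pem = private_key.public_key().public_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PublicFormat.SubjectPublicKeyInfo)

    # The private half stays local, encrypted under a passphrase.
    private_pem = private_key.private_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.PKCS8,
        encryption_algorithm=serialization.BestAvailableEncryption(b"passphrase"))

Nothing in the mathematics prevents a user from mailing private_pem instead of public_pem; the distinction lives entirely in the interface and the user's understanding, which is precisely the usability problem at issue.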
The problem is not limited to the tools alone. General issues of security hygiene arise, such as leaving computers unlocked in vulnerable environments; indeed, some organisations such as universities and Internet cafes disable the screen locks to prevent monopolisation of shared computers. Poor password practice is common (Weber et al., 2008); a typical scenario is demonstrated by technicians, as illustrated by "Ted", a technician who had arrived to update some software for user "Alice", who had gone to speak to a colleague on the other side of their open-plan workspace.

Ted (shouting across the room): "Alice, what's your password?"
Alice (shouting back): "It's (her real password)."

This particular workplace dealt with medically-sensitive information, and all the workers were (at least in theory) aware of the need to control this information. This same workplace, despite using a relatively modern mail server, insisted that the only way for one user to access the email of another user while that user was on long-term leave was to ask the absent user for her password. The more appropriate mechanism involved auditable delegation via the mail server.

Thus the endpoints, the computers our end-users are using, are a significant weak link. Data is accessible on these machines, and some store CMS keys with no further protection. Gene Spafford is quoted as saying:

"Using encryption on the Internet is the equivalent of arranging an armored car to deliver credit-card information from someone living in a cardboard box to someone living on a park bench."

The changing consumer computing environment produces challenges: desktop machines are relatively well understood, but mobile devices with operating systems such as iOS and Android pose additional challenges. They are easier to steal and, in some cases, have a more limited access control model (Hayashi et al., 2012).

Some attempts to secure endpoints address the difficulty of handling many good-quality passwords. For example, IDSpace proposes a single user interface that supports a range of existing identity management technologies (Al-Sinani and Mitchell, 2011). More ambitious is Stajano's Pico (Stajano, 2011). This involves a proposed hardware device that takes over the role of authentication. Of course, this brings issues of authenticating the user to the Pico; the use of a swarm of picosiblings (using k out of n secret sharing) and biometrics is suggested. A very broad discussion of proposals for replacing passwords is due to Bonneau et al. (2012).
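The k-out-of-n secret sharing underpinning the picosiblings proposal can be stated compactly. The following is a minimal sketch of Shamir's scheme over a prime field; the field size and the parameters are our own choices for illustration, not Pico's.

    # Minimal sketch of Shamir's k-out-of-n secret sharing: any k shares
    # reconstruct the secret, fewer reveal nothing about it.
    import secrets

    P = 2**127 - 1  # a Mersenne prime, large enough for a 16-byte secret

    def split(secret: int, k: int, n: int) -> list[tuple[int, int]]:
        """Return n shares, any k of which recover `secret`."""
        coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
        def f(x):  # evaluate the degree-(k-1) polynomial mod P
            return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return [(x, f(x)) for x in range(1, n + 1)]

    def reconstruct(shares: list[tuple[int, int]]) -> int:
        """Lagrange interpolation at x = 0 over GF(P)."""
        total = 0
        for xi, yi in shares:
            num, den = 1, 1
            for xj, _ in shares:
                if xj != xi:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            total = (total + yi * num * pow(den, -1, P)) % P
        return total

    shares = split(secret=1234567890, k=3, n=5)
    assert reconstruct(shares[:3]) == 1234567890

With k = 3 and n = 5, any three picosiblings suffice to unlock the device, while the loss of one or two reveals nothing about the secret.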
Lastly, re-identification and recovery issues provide a potential weak point in many deployed systems. Forcing password recovery via some means an attacker controls is well known.

The problem of endpoint security relates back to issues of liability, and the legal and contractual context of these transactions. ENISA states that

"many online banking systems dangerously rely on PCs being secure, but banks should instead presume all customer PCs are infected." (ENISA, 2012)

On the same theme, Krebs comments:

"No online banking authentication system works unless it starts with the premise that the customer's machine is already compromised by malware that gives thieves complete control over the customer system. But for better or worse, the commercial banks have no (dis)incentive to do much to improve the integrity of online banking transactions because the current regulations effectively hold them blameless when a customer loses money." (Krebs, 2010)

The latter point is a significant point of variation. Recovery of funds lost to criminal activity varies amongst different jurisdictions. Even where regulations apparently are in the consumers' favour, the reality can be different (Anderson and Bohm, 2008; Mason and Bohm, 2011).

So far, we have discussed the value of user-visible applications of cryptography for the mitigation of risks in our user stories. We now take a more holistic view and examine trust.

We place trust in cryptographic protocols that are believed to be sound, and in their implementations on the basis of testing, review, credibility, etc. But for most of our user stories, data confidentiality being an exception, these depend on trusted third parties (TTPs): the certificate authorities. These TTPs are used to bootstrap trust when there is no prior relationship between the first and second parties. The users trust the CA to check the identity of the service provider and correctly link the offered X.509 certificate to that identity.

However, we have seen that this trust may be misplaced: in section 6 we noted issues with bootstrapping trust relationships (Perlman, 1999) and highlighted the Comodo and more recent compromises (InfoSecurity, 2011). In section 7, we remarked that some of these issues can be mitigated by the local network infrastructure, e.g., to allow opportunistic encryption, if that local infrastructure is trusted.

There are alternatives to a world-wide naming scheme (i.e., requesting a certificate from a well-known CA); one is to use local, closed CAs. A further option is web-of-trust style keying. These have been discussed at some length earlier.

Other attempts to fix the existing CA environment include "pinning", which whitelists the public keys expected by a particular browser, making it harder for untrustworthy certificate chains to go undetected. More interesting is the use of multiple notaries as illustrated in the
Perspectives project (Wendlandt et al., 2008) and the subsequent Convergence add-on/daemon (Convergence, 2011): both have users selecting notaries that they trust, rather than relying on the default root CAs provided in (say) a web browser. However, our earlier arguments suggest that casual users will not be willing to engage in any additional work to choose their trust relationships, as there is no real improvement in their situation given the user effort required; this broadly matches Herley's conclusions (Herley, 2009). A broader discussion of how trust operates in societies is given by Schneier (2012).

Despite the issues with CAs described earlier, we may ask why companies such as Verisign and Entrust, amongst others, can run a business selling SSL certificates. We suggest that there are two major factors:

• regulatory compliance, such as the PCI SSC Data Security Standard (PCI Security Standards Council, 2010); and
• the inclusion of their root certificates in major web browser installation packages.

For the relatively low cost per unit, an individual business will not need to consider the purchase for long; yet the vendors have a wide range of potential buyers, and this is a business that scales well.

In practice, the TTP infrastructure might not be trustworthy, but users use it anyway, and when warning dialogues appear they are disregarded. Some protocols, e.g., SSH, can record known host keys and warn when one changes, in the style of key continuity management (discussed earlier in section 7.3); similarly, we observe that many users disregard this and continue with their connection. These points relate to the awareness and education issues previously highlighted.
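For readers unfamiliar with key continuity management, the following sketch illustrates the SSH known-hosts style of pinning: remember the certificate fingerprint seen on first contact and warn when it later changes. The storage format and file name are hypothetical; deployed pinning (as in browsers) pins public keys and must also handle legitimate re-keying.

    # Hedged sketch of trust-on-first-use pinning in the known_hosts style.
    # The JSON store and its name are illustrative, not any tool's format.
    import hashlib
    import json
    import ssl

    PIN_FILE = "known_hosts.json"   # hypothetical pin store

    def cert_fingerprint(host: str, port: int = 443) -> str:
        pem = ssl.get_server_certificate((host, port))
        der = ssl.PEM_cert_to_DER_cert(pem)
        return hashlib.sha256(der).hexdigest()

    def check(host: str) -> None:
        try:
            with open(PIN_FILE) as f:
                pins = json.load(f)
        except FileNotFoundError:
            pins = {}
        seen = cert_fingerprint(host)
        pinned = pins.get(host)
        if pinned is None:
            pins[host] = seen               # trust on first use
            with open(PIN_FILE, "w") as f:
                json.dump(pins, f)
        elif pinned != seen:
            raise RuntimeError(f"certificate for {host} changed: "
                               "possible MITM, or a legitimate re-keying")

As the text notes, many users simply click through the resulting warning, so the mechanism helps only those who heed it.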
12 Conclusions
We have examined user-visible applications of cryptography. In part, our analysis has been structured by ten user stories in the context of the UK regulatory environment. None of these are what would classically be considered "critical systems". Instead, they are routine, day-to-day scenarios. These user stories have been addressed by scenario-based analysis, and we have examined the balance of risk and value of user-visible applications of cryptography, and the subsidiary deployment and trust issues.

We see that despite the apparent problems, particularly those associated with deployment, endpoint security and trust (especially of CAs), the deployed systems work relatively well. Our survey suggests that this is due to the presence of mitigating factors, such as guarantees to bank account holders and recourse to the legal system.

We return to the three categories of user-visible applications of cryptography from section 5.2.

1. Direct, elective invocation of a cryptographic tool is very rare in the user populations.

2. Indirect but still explicit, elective use of cryptography is valuable in the
Data confidentiality (10) user story, but does not appear elsewhere.

3. Implicit, background use of cryptography accounts for most of the usage. Even where problems occur, we argue that users ignore or otherwise accept the risks (similar to Herley's (2009) argument) or are content for other mitigations to operate.

Any application of user-visible cryptography must make sense in that particular context. Our survey illustrates that the social/legal framework often does not demand any cryptographic mechanism, or that users, software and/or the infrastructure compromise its effectiveness. A significant exception is the use of encryption to ensure data confidentiality, particularly for removable media. In general, the potential security issues are essentially peripheral to the user's concerns; the users are trying to achieve some other objective using the computer as a tool.

In the work leading to this survey, we see three particularly relevant areas for further work:

• metrics to objectively establish quantitative measures for the value of user-visible applications of cryptography in these types of user story;
• usability and education issues, as discussed by other authors in section 3; and
• the balance of automation and control, which we discuss next.

We have remarked that the endpoint computing devices are a significant vulnerability. We suggest the following definitions:
Automation: This relates to the computer making decisions with minimal, if any, user intervention, and incorporates elements such as robustness. For example, if the user is asked to handle a failed verification that could be due to network problems, incompatibilities of cryptography, inadvertently modified files or malicious attack, this is a lack of automation.
Control: This is the ability of a principal to dictate the usage of a computing device and access to information, and may include some authority or opinion over trust models and trust roots. A locked-down computer, where the end-user cannot install anything that is not approved by the original vendor, could be viewed as overly paternalistic, but could, if the vendor has suitable judgement, increase the possible automation in terms of certificate authorities. Such machines are more computer appliances than general-purpose computers. Of course, this approach is anathema to the free/libre open source software (FLOSS) community.

We remark firstly that automation and control are not necessarily opposing, although there is an obvious tension. Moreover, they are potentially different for each stakeholder, e.g., computer user vs. system administrator vs. software publisher vs. software developer. Both automation and control need to be balanced against the overall risk.

Earlier comments about usability lead us to conclude that one partial mitigation is for software to require proactive changes of security settings rather than reactive changes. As an example, consider access to a website using an SSL certificate that has not been suitably signed. A reactive approach allows the user to add exceptions. Instead, the software could simply refuse access, as an extreme level of automation. The proactive approach requires that the user take deliberate action unprompted by access to the web server: this gives the user some control, with less compromise of automation. Essentially, this is to make it harder for users to say "I don't care, just let me access the web site". A multitude of controls and fine-grained options has no value if the user will click "okay, get on with it" no matter what; perhaps heavy automation with strong controls is a suitable amelioration. This deserves further user-focused study, perhaps by systematic repetition of our scenarios but examining the interactions of particular interfaces more closely. A practical implication is that simplified, constrained applications may be a short-term compromise that reduces the risks while still allowing sufficient utility.
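As an illustration of the proactive approach, consider a client that always verifies certificates and offers no reactive "add an exception" path: the only way to reach a privately-signed server is to install its CA certificate beforehand, deliberately. The sketch below uses Python's standard ssl module; the file name local-ca.pem and the host are hypothetical.

    # Sketch of "proactive rather than reactive" certificate handling.
    import socket
    import ssl

    ctx = ssl.create_default_context()   # always verifies chain and hostname
    # Proactive control: a trust root is installed in advance of any
    # connection (local-ca.pem is a hypothetical file):
    # ctx.load_verify_locations("local-ca.pem")

    def fetch(host: str) -> None:
        try:
            with socket.create_connection((host, 443), timeout=10) as sock:
                with ctx.wrap_socket(sock, server_hostname=host):
                    pass   # verified TLS channel established
        except ssl.SSLCertVerificationError:
            # The reactive path is refused outright: no exception dialogue.
            raise SystemExit(f"{host}: unverified certificate; "
                             "install its CA first")

The failure mode here is deliberate inconvenience: the user must act before, not during, the connection, which removes the temptation to click through a warning.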
Acknowledgements
This paper has been through several revisions over the past two or so years. Wethank the various anonymous reviewers who have made valuable comments onthis work.
References
Ackerman, M. S., Cranor, L. F., and Reagle, J. 1999. In Proceedings of the 1st ACM Conference on Electronic Commerce, EC '99.

Adobe. 2013. EchoSign. Last checked 8th March 2013.

Al-Sinani, H. S. and Mitchell, C. J. In Proc. TRUST 2012. LNCS, vol. 7163. 49–74.

Anderson, R. and Bohm, N. 2008. Last checked 19th July 2012.

Anderson, R. J. 2008. Security Engineering: A Guide to Building Dependable Distributed Systems, 2nd ed. Wiley.

BILETA. 2011. Response to Digital Agenda for Europe: Electronic identification, authentication and signatures in the European digital single market public consultation. http://ec.europa.eu/information_society/policy/esignature/docs/pub_cons/offline_contrib/bileta.pdf.

BitDefender. 2010. Bitdefender finds exposed social media credentials often provide access to email accounts. Last checked 29 July 2011.

Blanchette, J.-F. 2012. Burdens of Proof: Cryptographic Culture and Evidence Law in the Age of Electronic Documents. MIT Press.

Böhme, R. and Grossklags, J. In Proc. New Security Paradigms Workshop.

Boneh, D. and Franklin, M. 2003. Identity-based encryption from the Weil pairing. SIAM J. of Computing 32, 3, 586–615.

Bonneau, J., Herley, C., van Oorschot, P. C., and Stajano, F. 2012. The quest to replace passwords: A framework for comparative evaluation of web authentication schemes. In Proc. IEEE Symposium on Security and Privacy.

British Standards Institute. 2008. Evidential weight and legal admissibility of electronic information — specification. BS 10008:2008.

Brooke, P. J., Paige, R. F., and Power, C. 2010. Software Practice & Experience 40, 8, 655–672.

Callas, J., Donnerhacke, L., Finney, H., Shaw, D., and Thayer, R. 2007. OpenPGP message format. RFC 4880. http://tools.ietf.org/html/rfc4880.

Carroll, J., Rosson, M., Chin, G., Jr., and Koenemann, J. 1998. IEEE Transactions on Software Engineering 24, 12 (Dec), 1156–1170.

Cisco. 2011. Cisco IronPort Email Security Appliances. Last checked 29th July 2011.

Cloud Computing Security Working Group. 2011. Cloud computing security considerations and recommendations — usage scenario: Software as a service (SaaS) electronic mail. Last checked 27th July 2011.

Cocks, C. 2001. An identity based encryption scheme based on quadratic residues. In Cryptography and Coding, B. Honary, Ed. LNCS, vol. 2260. Springer-Verlag, 360–363.

Convergence. 2011. Convergence. http://convergence.io/, last checked 24th July 2012.

Cooper, D., Santesson, S., Farrell, S., Boeyen, S., Housley, R., and Polk, W. 2008. Internet X.509 public key infrastructure certificate and certificate revocation list (CRL) profile. RFC 5280. http://tools.ietf.org/html/rfc5280.

Corbet, J. 2011. *.google.com certificate issued. http://lwn.net/Articles/456798/, last checked 8th September 2011.

Cranor, L. F. and Garfinkel, S., Eds. 2005.
Security and Usability: Designing Secure Systems that People Can Use. O'Reilly.

Cronin, M. J., Ed. 1998. Banking and Finance on the Internet. Wiley.

Debian. 2010. Debian FAQ: Keeping your Debian system up-to-date. Last checked 26th July 2011.

Ellison, C. 1999. SPKI requirements. RFC 2692. http://tools.ietf.org/html/rfc2692.

Ellison, C., Frantz, B., Lampson, B., Rivest, R., Thomas, B., and Ylonen, T. http://tools.ietf.org/html/rfc.

Ellison, C. M. http://world.std.com/~cme/html/spki.html, last checked 29th July 2011.

ENISA. 2012. EU cyber security agency ENISA; "high roller" online bank robberies reveal security gaps. Last checked 19th July 2012.

European Commission. 2011. European legislation on eSignature. http://ec.europa.eu/information_society/policy/esignature/eu_legislation/index_en.htm, last checked 20th July 2011.

European Parliament and Council. 1999. Directive 1999/93/EC of the European Parliament and of the Council of 13 December 1999 on a Community framework for electronic signatures. http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=CELEX:31999L0093:en:NOT, last checked 20th July 2011.

Federal Evidence Review. 2012. Rule 803. Exceptions to the rule against hearsay — regardless of whether the declarant is available as a witness. http://federalevidence.com/rules-of-evidence, last checked 9th March 2012.

Friedman, B., Hurley, D., Howe, D. C., Felten, E., and Nissenbaum, H. In Proc. CHI 2002.

Furnell, S. 2007. Computers & Security 26, 434–443.

Furnell, S., Jusoh, A., and Katsabas, D. Computers & Security.

Garfinkel, S. L. IEEE Security & Privacy 1.

Garfinkel, S. L. In Proc. Digital Government Research.

Garfinkel, S. L. and Miller, R. C. In Proc. Symposium On Usable Privacy and Security.

Gaw, S., Felten, E. W., and Fernandez-Kelly, P. In Proc. CHI 2006.

Gentry, C. 2003. Certificate-based encryption and the certificate revocation problem. In Proceedings of the 22nd International Conference on Theory and Applications of Cryptographic Techniques, EUROCRYPT '03. Springer-Verlag, 272–293.

Goecks, J., Edwards, W. K., and Mynatt, E. D. In Proc. Symposium on Usable Privacy and Security (SOUPS).

Grimes, R. 2009. Last checked 29th July 2011.

Gross, J. B. and Rosson, M. B. In Proc. Symposium on Usable Privacy and Security (SOUPS).

Gross, J. B. and Rosson, M. B. In Proc. CHIMIT '07.

Gutmann, P. 2003. Plug-and-play PKI: A PKI your mother can use. In Proc. 12th USENIX Security Symposium. 45–58.

Gutmann, P. AusCERT conference slides.

Gutmann, P. Last checked 13th July 2011.

Gutmann, P. and Grigg, I. 2005. Security usability. IEEE Security & Privacy, 56–58.

Hayashi, E., Riva, O., Strauss, K., Brush, A. B., and Schechter, S. 2012. In Proc. Symposium on Usable Privacy and Security (SOUPS).

Herley, C. 2009. So long, and no thanks for the externalities: The rational rejection of security advice by users. In Proc. New Security Paradigms Workshop.

HMSO. 1995. Civil Evidence Act. (c.38).

HMSO. 1998. Data Protection Act. (c.29).

HMSO. 2000. Electronic Communications Act. (c.7).

HMSO. 2002. The Electronic Signatures Regulations. (no. 318).

Ho, J. T., Dearman, D., and Truong, K. N. In Proc. Symposium on Usable Privacy and Security (SOUPS).

Housley, R. 2009. Cryptographic message syntax (CMS). RFC 5652. http://tools.ietf.org/html/rfc5652.

Howes, P. Evidential weight and legal admissibility of linking electronic identity to documents: Code of practice for the implementation of BS 10008, 4th ed. Number BIP 0008-3. British Standards Institution.

Hushmail. 2010. Using Java with Hushmail. https://help.hushmail.com/entries/245155-using-java-with-hushmail, last checked 19th July 2011.

Hushmail. 2011. How Hushmail can protect you. Last checked 19th July 2011.

Ibrahim, T., Furnell, S. M., Papadaki, M., and Clarke, N. L. In Proc. TrustBus, S. Katsikas, J. Lopez, and M. Soriano, Eds. LNCS, vol. 6264. 177–189.

Information Commissioner's Office. 2011. Latest news releases. Last checked 29th July 2011.

Information Commissioner's Office. 2012. Cookies. Last checked 23rd July 2012.

InfoSecurity. 2011. Comodo certificate compromise has Iranian fingerprints. Last checked 29th July 2011.
Jackson, C., Simon, D. R., Tan, D. S., and Barth, A. In Proc. Usable Security.

Hess, J. and others. 2006. SecureApt: All about secure apt. http://wiki.debian.org/SecureApt, last checked 26th July 2011.

Kapadia, A. IEEE Security & Privacy 5.

Kirlappos, I., Sasse, M. A., and Harvey, N. 2012. In Proc. TRUST 2012. LNCS, vol. 7344. 308–324.

Koch, W. and Brinkmann, M.

Krawczyk, P. Digital Evidence & Elec. Signature L. Rev. 7.

Krebs, B. 2010. http://krebsonsecurity.com/2010/06/e-banking-bandits-stole-465000-from-calif-escrow-firm/, last checked 19th July 2012.

Kuchera, B. 2011. http://arstechnica.com/gaming/news/2011/06/lawsuit-sony-laid-off-security-staff-was-unprepared-for-ps3-hacks.ars, last checked 28th July 2011.

Lane, A. 2009. http://securosis.com/blog/database-encryption-part-6-use-cases, last checked 26th July 2011.

Lee, B., Boyd, C., Dawson, E., Kim, K., Yang, J., and Yoo, S. 2004. In Proceedings of the Second Workshop on Australasian Information Security, Data Mining and Web Intelligence, and Software Internationalisation, ACSW Frontiers '04, vol. 32. Australian Computer Society, Inc., 69–74.

Likarish, P., Jung, E., Dunbar, D., Hansen, T. E., and Hourcade, J. P. In Proc. IEEE International Conference on Communications.

Martin, L. IEEE Security & Privacy 4.

Mason, S. Last checked 20th July 2011.

Mason, S. 2012. Electronic Signatures in Law, 3rd ed. Cambridge University Press.

Mason, S. and Bohm, N. 2011. Last checked 19th July 2012.

Mathewson, N. and Dingledine, R. 2004. In Proc. FC 2004, A. Juels, Ed. LNCS, vol. 3110. 227–232.

McQueen, M. In Proc. IECON. Tutorial slides.

Microsoft. 2007. How to keep your Windows computer up-to-date. http://support.microsoft.com/kb/311047, last checked 26th July 2011.

Monitor. 2008. Electronic signature policy.

Mozilla Security Blog. 2011. Comodo certificate issue – follow up. http://blog.mozilla.com/security/2011/03/25/comodo-certificate-issue-follow-up/, last checked 29th July 2011.

Myers, M., Ankney, R., Malpani, A., Galperin, S., and Adams, C. 1999. X.509 Internet public key infrastructure online certificate status protocol — OCSP. RFC 2560. http://tools.ietf.org/html/rfc2560.

National Institute of Standards and Technology. 2005. Integrating IT security into the capital planning and investment control process. http://csrc.nist.gov/publications/nistpubs/800-65/SP-800-65-Final.pdf.

Parno, B., McCune, J. M., and Perrig, A. In Proc. IEEE Symposium on Security and Privacy.

PCI Security Standards Council. 2010. Data security standard. Last checked 24th July 2012.

Perlman, R. 1999. An overview of PKI trust models. IEEE Network.

Reid, R. C., Platt, R. G., and Wei, J. In Proc. Information Security Curriculum Development Conference. 60–65.

Reiniger, T. and Francoeur, J. R. Digital Evidence & Elec. Signature L. Rev. 7.

Rosson, M. B. and Carroll, J. The Human-Computer Interaction Handbook. Lawrence Erlbaum Associates, Chapter 53, 1032–1050.
Schneier, B. 1996. Applied Cryptography, 2nd ed. Wiley.

Schneier, B. 2007. Last checked 23rd August 2011.

Schneier, B. 2012. Liars and Outliers: Enabling the Trust That Society Needs to Thrive. Wiley.

Shamir, A. 1985. Identity-based cryptosystems and signature schemes. In Advances in Cryptology, G. Blakley and D. Chaum, Eds. LNCS, vol. 196. Springer-Verlag, 47–53.

Shipman, A. Evidential weight and legal admissibility of information stored electronically: Code of practice for the implementation of BS 10008, 4th ed. Number BIP 0008-1. British Standards Institution.

Shipman, A. and Howes, P. Evidential weight and legal admissibility of information transferred electronically: Code of practice for the implementation of BS 10008, 4th ed. Number BIP 0008-2. British Standards Institution.

Schultze, S. https://freedom-to-tinker.com/blog/sjs/firefox-changes-its-https-user-interface-again/, last checked 26th July 2012.

Singel, R. Last checked 29th July 2011.

SOCIALNETS. 2009. Deliverable D4.1: Barriers and opportunities for social networks. Last checked 26th July 2011.

Srivastava, A. 2009. Digital Evidence & Elec. Signature L. Rev. 6.

Stajano, F. 2011. Pico: No more passwords! In Proc. Security Protocols Workshop 2011. LNCS, vol. 7114. 49–81.

Stajano, F. and Anderson, R. The resurrecting duckling: Security issues for ad-hoc wireless networks. In Proc. Security Protocols 7th International Workshop. LNCS, vol. 1796. 172–182.

Stolfo, S., Bellovin, S. M., and Evans, D. 2011. IEEE Security & Privacy, 60–65.

Straub, T. and Baier, H. In Proc. EuroPKI.

Sweikata, M., Watson, G., and Frank, C. In Proc. Information Security Curriculum Development Conference. 55–59.

Weber, J., Guster, D., and Safonov, P. 2008. Journal of Information Technology Management 19.

Wendlandt, D., Andersen, D. G., and Perrig, A. 2008. Perspectives: Improving SSH-style host authentication with multi-path probing. In Proc. USENIX Annual Technical Conference.

Whitten, A. and Tygar, J. D. 2005. Why Johnny can't encrypt: A usability evaluation of PGP 5.0. In Security and Usability: Designing Secure Systems that People Can Use, L. F. Cranor and S. Garfinkel, Eds. O'Reilly, 669–692.

Zurko, M. E. and Simon, R. T.