Trustworthy and Privacy-Aware Sensing for Internet of Things
A Preprint
Ihtesham Haider
Institute of Networked and Embedded Systems
Alpen-Adria-Universität Klagenfurt, Austria
[email protected]

Bernhard Rinner
Institute of Networked and Embedded Systems
Alpen-Adria-Universität Klagenfurt, Austria
[email protected]
August 28, 2018

Abstract
The Internet of Things (IoT) is considered a key enabling technology for smart services. Security and privacy are particularly open challenges for IoT applications due to the widespread use of commodity devices. This work introduces two hardware-based lightweight security mechanisms to ensure sensed data trustworthiness (i.e., sensed data protection and sensor node protection) and usage privacy of the sensors (i.e., privacy-aware reporting of the sensed data) for centralized and decentralized IoT applications. Physically unclonable functions (PUFs) form the basis of both proposed mechanisms. To demonstrate the feasibility of our PUF-based approach, we have implemented and evaluated PUFs on three platforms (Atmel 8-bit MCU, ARM Cortex-M4 32-bit MCU, and Zynq7010 SoC) with varying complexities. We have also implemented our trusted sensing and privacy-aware reporting scheme (for centralized applications) and our secure node scheme (for decentralized applications) on a visual sensor node comprising an OV5642 image sensor and a Zynq7010 SoC. Our experimental evaluation shows that our security mechanisms incur a low overhead with respect to latency, storage, hardware, and communication.
The future smart world envisions a living environment in which people are automatically and collaboratively served by smart devices and smart spaces interconnected via the Internet of Things (IoT). IoT applications collect data from various data sources. This data is used for intelligence extraction by machine learning models deployed on cloud and/or edge computing infrastructure. The actionable insight obtained from this intelligence extraction is offered as a service to end users, but also provides resource efficiency, data knowledge and automated decision-making processes to enterprises.

Numerous smart services are being conceptualized, researched, prototyped, tested and commercially used today. For instance, smartphones embedded with rich sensing capabilities have enabled navigation Farrell and Barth [1999], m-commerce Fueled, natural-disaster detection and warning systems Clayton et al. [2012], environmental monitoring Carrapetta et al., and citizen journalism Das et al. [2010]. With wearable health sensors it is now possible to monitor the blood sugar level and heart pace Schoenfeld et al. [2004], provide assisted living for elderly patients with chronic diseases eCAALYX [2015], and document daily sports activities of individuals Denning et al. [2009]. Smart vehicles have enabled autonomous driving Tesla, cooperative collision avoidance Hafner et al. [2013], remote wireless diagnosis of vehicles Lightner et al. [2003], and traffic flow optimization Varaiya [1993]. Likewise, smart homes Cook et al. [2003] and safe cities Ballesteros et al. [2012] are enabling smart spaces that are intelligent, resource efficient, and secure.

The physical infrastructure of today's IoT applications, as depicted in Fig. 1, can be divided into three tiers: data source, edge computation, and cloud computation. The data source tier includes everything that generates data.
Sensors are the largest and most common source of data in IoT applications. Other sources include RFIDs, machine logs, social media feeds and event sources. The edge computing tier comprises host devices and micro data centers, which are responsible for running data processing pipelines and handling network switching, routing, load balancing and security.

Figure 1: A generic infrastructure for IoT applications

A host device can either be a commodity device with computing, storage and communication resources, such as a smartphone, a computer, a gateway router or a smart vehicle ECU, or a processing platform solely dedicated to the attached data sources. Micro data centers host virtualization infrastructure which runs cloud services closer to the data sources. These data centers are distributed across locations, for example at cellular base-station sites. The cloud computing tier comprises a centralized pool of computing, storage and communication resources, which offers data management, analytics, software or hardware platforms, or a combination of these, as-a-Service (aaS).

The infrastructure of Fig. 1 encompasses three layers of abstraction: the technology, middleware, and application layers Atzori et al. [2010]. The technology layer is comprised of sensing, identification, computing and communication resources. The middleware is a software layer, or a set of sub-layers, that resides between the technology and the application layer. The middleware hides the technology-level details from the application programmers, thereby simplifying the application development process. The application layer is the topmost layer, exporting all the system's functionality to the end users by exploiting the functionality of the middleware, standard web interfaces, and protocols.
This work addresses two security threats for IoT applications: sensed data pollution and personal privacy leakage. Sensed data pollution is a major threat posed to IoT applications whereby malicious users or third-party adversaries contribute manipulated or fabricated sensed data to pollute the application database Saroiu and Wolman [2010]. An adversary can exploit a number of security vulnerabilities present in the infrastructure of Fig. 1 to mount this attack. First, the data source layer is mainly comprised of commodity devices embedded with a multitude of resource-constrained sensors connected to a host processor. Sensitive data captured by these sensors does not carry any security guarantees. These sensors rely on a resource-rich host device for processing and reporting the sensed data to a micro data center (or a cloud server). Host devices are commodity devices running a thick, vulnerable software stack G-DATA, Saroiu and Wolman [2010]. As a result, today it is trivial to manipulate or fabricate sensor readings in these applications by exploiting bugs (e.g., Master Key eSecurity Planet and Fake ID Security) in the software stack (e.g., OS) running on the commodity devices. For instance, location readings from a smartphone GPS sensor can be modified to obtain illegitimate access to a location-based service Liu et al. [2012], video frames from a surveillance camera can be manipulated to hide or fake an event Winkler and Rinner [2014], and a patient's blood sugar level measured by wearable sensors can be manipulated to stop the insulin pump Liu et al. [2012]. Second, the use of public infrastructure for communication (Internet) and storage (public cloud storage servers) further increases the threat surface of these applications. Third, due to the open and ubiquitous nature of the infrastructure, certain elements such as sensors may not be protected against physical attacks. Physically damaged sensors are another potential source of data pollution attacks. Consequently, any service based on this data lacks trust.
The second threat addressed by this work is leakage of personal privacy. IoT applications collect and process information from almost every aspect of our daily lives, e.g., our private data (photos, medical reports), our routines, habits and preferences (transportation, shopping, political and religious views), and our critical infrastructures (energy, emergency systems). By linking individual data points obtained from wearable, personal devices (e.g., smartphones or cars) and private spaces (e.g., home security, assisted living or baby monitoring applications), one can construct personal profiles of individuals revealing their sensitive personal information such as home and workplace, contact details, health status, religious orientation, political affiliations, current and future locations etc.
The goals of this work are twofold: First, we identify and apply security mechanisms to ensure effective and verifiable trustworthiness of the sensed data collected from vulnerable commodity devices in open networks such as the IoT. We adopt the definition of trustworthy sensed data by Liu et al. [2012] (i.e., data carrying integrity, authenticity, and freshness guarantees) and ensure the trustworthiness of sensed data in IoT applications. Second, certain IoT applications may require these sensors to capture sensitive personal information about individuals. We incorporate by-design personal privacy protection mechanisms which allow individuals to submit sensed data in a privacy-aware manner.

The diverse nature of IoT applications imposes varying requirements on the data source tier. We categorize these applications into two groups: (i) applications attributed by centralized processing, i.e., raw sensed data is collected at the server side (micro data center or cloud server) for processing, and (ii) applications attributed by distributed processing, i.e., sensed data is processed locally on the sensor nodes.

This manuscript extends our preliminary work on trusted sensing Haider and Rinner [2017a,b] and comprehensively introduces the concepts of trusted sensing and secure camera nodes in a holistic IoT setting. In particular, we first expand trusted sensing Haider and Rinner [2017a] by addressing the personal privacy leakage caused by the incorporation of trusted sensors into smart devices such as smartphones. Second, we extend secure camera nodes Haider and Rinner [2017b] for visual monitoring applications by exploring various PUF sources as root of trust for secure sensor node implementation. Furthermore, both concepts are evaluated using real-world application scenarios. Overall, the main contributions of this work can be summarized as follows:

• First, we present a trusted sensing concept for centralized IoT applications. This concept exploits lightweight security circuits called physically unclonable functions (PUFs) to extract a unique CMOS fingerprint of the sensor. The fingerprint, in combination with lightweight security mechanisms, ensures non-repudiation (i.e., integrity, authenticity and freshness) of sensor readings. On-chip PUFs assist in detecting hardware tampering of the sensor.

• Second, for centralized IoT applications, we perform anonymization of the sensed data from trusted sensors on the host device using non-interactive witness-indistinguishable proofs to ensure privacy-aware submission of the sensed data to the IoT applications.

• Third, we present a secure node architecture for applications that require processing of sensed data locally on the sensor nodes. The architecture, implemented as a system-on-chip, derives the security keys from the sensor's PUF-based CMOS fingerprint. Integrity, authenticity, confidentiality, freshness and access authorization of the sensed data are protected using an encrypt-then-sign technique. Secure boot of the SoC ensures integrity, authenticity and unclonability of the node's firmware. Hardware tampering can be detected due to the tamper-evidence property of the on-chip PUF.

• Fourth, we evaluate both mechanisms using two case studies. Trusted sensing and anonymization of data from trusted sensors are evaluated using a participatory sensing scenario, whereas the secure node approach is evaluated using a private space monitoring scenario. A trusted image sensor and a secure camera node are implemented using a Zynq7010 SoC and an OV5642 5MP image sensor as platform, and the latency, hardware, storage and communication overhead incurred by both approaches is evaluated. To demonstrate the feasibility of the PUF-based approach, we also implemented and evaluated PUFs on three platforms (Atmel 8-bit MCU, ARM Cortex-M4 and Zynq7010 SoC) of varying complexities that are ideally suited as sensing platforms for a broad range of sensors.

The remainder of the paper is organized as follows: Section 2 presents the state-of-the-art technologies for ensuring sensed data trustworthiness and sensors' usage privacy in IoT applications. Section 3 provides an overview of the employed approach. We present the details of our trusted sensing and privacy-aware reporting scheme for centralized IoT applications in Section 4 and the secure node scheme for decentralized applications in Section 5. We evaluate both schemes in Section 6 and discuss relevant security and privacy properties. Section 7 concludes the paper.
Approach                           Mechanism                 Data Security   Node Security   Usage Privacy
Saroiu and Wolman [2010]           TPM in sensor             yes             no              no
Dua et al. [2009]                  TPM in sensor             yes             no              no
Winkler and Rinner [2010]          TPM in camera             yes             partial         partial
Winkler et al. [2014]              TPM in image sensor       yes             no              no
Potkonjak et al. [2010]            PPUF in sensor            yes             no              no
Cornelius et al. [2008]            Mix networks              no              no              yes
De Cristofaro and Soriente [2011]  Sensed data encryption    yes             no              yes
Dimitriou et al. [2012]            Token-based data access   no              no              no
This work                          Trusted sensing           yes             yes             yes
This work                          Secure node               yes             yes             yes

Table 1: Classification of the related work on securing sensor nodes and contributions of this work
This section discusses the relevant related work on sensed data trustworthiness and personal privacy protection mechanisms in IoT scenarios.

The threat surface of IoT applications necessitates the protection of data closer to the data source(s). Securing sensor nodes in the IoT scenario entails data security, node security and usage privacy Winkler and Rinner [2014]. Data protection is typically implemented in firmware. Any modification of the underlying hardware can completely bypass the data protection. Therefore, node security is an essential requirement for data protection. Moreover, given the ubiquitous and unprotected nature of IoT infrastructure (especially the sensor nodes), the hardware-, software-, and data-protection mechanisms must consider the possibility of physical access to the nodes.

Research on securing the sensed data and the sensing devices has mainly focused on the integration of trusted platform modules (TPMs) and other secure cryptoprocessors into the sensors or host devices. The anonymous attestation feature of the TPM is used to attest to the integrity and authenticity of the sensed data closer to the data source. Furthermore, a TPM attests the system state before sensitive information is transmitted.

Early work on securing sensor nodes Dua et al. [2009] was motivated by participatory sensing. The work made a case for trustworthiness in participatory sensing by content protection. Incorporation of TPMs into mobile devices, participatory sensing application servers, and end-user devices was proposed. The TPM attests the integrity of the sensed data in the mobile devices for submission to a participatory sensing application server. The proof of concept comprised an add-on circuit board, housing a TPM (TCG v1.2) chip, interfaced to a Nokia N800 phone. The overhead incurred by the proposed solution amounted to kilobytes of memory (attestation code size), an attestation latency of . s, and a verification latency of . s.

Saroiu and Wolman [2010] introduced the concept of trusted sensors and proposed the integration of TPM functionality into mobile device sensors to ensure integrity of the sensed data within the sensors. The work identified the IoT applications that would benefit from the deployment of trusted sensors, including participatory sensing, monitoring energy consumption, and documenting evidence of crime scenes. A high-level conceptual design of a trusted sensor, in which a TPM is incorporated into a sensor, was presented. However, the work did not provide any proof of concept.

Dietrich and Winter [2009] explored software TPM implementations for embedded systems. Existing CPU extensions like ARM TrustZone were evaluated to implement a software TPM with security guarantees similar to those of dedicated hardware. Aaraj et al. [2008] also explored a software TPM solution. In order to achieve a performance improvement, critical functions were implemented on reconfigurable hardware.

Our earlier work, TrustCAM Winkler and Rinner [2010] and TrustEYE Winkler et al. [2014], exploited TPM chips for protecting embedded camera nodes. TrustCAM used the anonymous attestation and time-stamping features of the TPM to protect the integrity, authenticity and confidentiality of the image data on the host processor. To ensure image data integrity and authenticity, frame groups were signed using a platform-bound key. Digital signing slowed down the frame rate by only . frames per second compared to plain streaming. TrustEYE aimed to protect the captured images closer to the sensor. A TPM chip was integrated into the sensing unit, which has exclusive access to the sensor's data. Integrity, authenticity, confidentiality and freshness of the sensed data were protected at the sensing unit using 2048-bit RSA keys. A cartooning filter was implemented to preserve the privacy of monitored individuals Erdélyi et al. [2014]. At a resolution of × , a frame rate of frames per second was achieved.

Potkonjak et al. [2010] proposed a different approach for the trusted flow of sensed data in remote sensing scenarios. The approach employs public physically unclonable functions (PPUFs). PPUFs differ fundamentally from PUFs in several aspects: First, PPUFs are hardware security circuits that can be modeled by algorithms of high complexity, whereas PUFs cannot be modeled. Second, the security of a PPUF relies on the fact that the PPUF hardware output is many orders of magnitude faster than its software counterpart (i.e., the model), whereas the security of a PUF relies on the unclonability of the PUF circuit. The major drawback of the PPUF-based approach lies in the fact that current PPUF designs involve complex circuits that require high measurement accuracy. This slows down the authentication process, and the solution is therefore not scalable. Additionally, the solution targets applications where privacy is not a concern.

Some recent research efforts have led to the successful identification of PUF behavior on sensors. The sensor PUF is an idea introduced by Rosenfeld et al. [2010] whereby the PUF response is determined by the applied challenge as well as the sensor reading. Cao et al. [2015] introduced a CMOS image sensor based weak PUF. The PUF response bits are generated by comparing the random fixed-pattern noise in selected pixel pairs. Although PUFs are lightweight hardware security primitives that can be used to offer a scalable solution, identification of PUF behavior on sensors is only a part of the solution.

Early work on usage privacy of personal sensing devices was also motivated by participatory sensing. Anonysense Cornelius et al. [2008] is a participatory sensing model that uses a trusted authority to anonymize the sensed data. Instead of submitting the sensed data directly to the application server, the mobile devices submit the data to the anonymizing authority.
The authority collects the sensed data from the participating mobile devices, anonymizes it, and forwards it to the application server. Mobile devices communicate with the authority via a Mix network. The application server assigns sensing tasks to the mobile devices using the Tor anonymizing network. Anonysense offers k-anonymity, where k is given by the number of mobile devices contributing sensed data to the application server. Anonysense has a number of limitations: First, in order to guarantee k-anonymity, a Mix network may wait to receive k reports before forwarding them to the application server. This may significantly affect the service offered by the application. Second, anonymization is performed after the data leaves the smartphone, whereas bugs Wired have previously exploited vulnerabilities in the software stack of the smartphone to leak users' private information, thereby rendering the entire anonymization process ineffective.

PEPSI De Cristofaro and Soriente [2011], another participatory sensing framework, used identity-based encryption for end-to-end encryption of sensed data reports. Smartphones register with a trusted registration authority and obtain IDs corresponding to the application they intend to participate in. The application server only receives encrypted reports and forwards them to the intended end user by matching the tags. The solution is only suitable for decentralized applications, as the server cannot process the encrypted reports.

PEPPeR Dimitriou et al. [2012] proposed a protocol for privacy-aware access to the sensed data by the end users (sensed data consumers) in participatory sensing networks. An end user obtains tokens from the application server which reveal nothing about either the user's identity or the user's desire to spend the token on a specific sensed datum. Token validity and double-spending prevention are incorporated in the protocol using classic cryptographic techniques.

An overview of the discussed related work is summarized in Table 1. To summarize the previous work on sensed data trustworthiness: a TPM-based approach incurs significant hardware overhead on a node, which may not be an economical solution for resource-constrained sensor nodes. Despite widespread deployment of TPMs in laptops, desktops, and servers for over a decade, TPMs have not yet found their way into resource-constrained embedded devices. Moreover, TPMs do not provide protection against physical attacks. Due to the open nature of IoT applications, sensors might be physically accessible to attackers, which renders TPM-based solutions ineffective in the given scenario. Protocols based on complex PPUF primitives are slow, have limited scalability and do not address privacy protection. In this work, we identify PUF behavior on platforms that can serve as sensing platforms for a broad range of sensors. Furthermore, usage privacy of the sensors was not explicitly considered or addressed by any of the reviewed works.

In the related work on sensors' usage privacy, all proposed solutions are based on an online trusted authority. The online nature of the authority significantly increases the risk of key compromise. Mix network based solutions are slow and may not be ideal for real-time or latency-critical applications. Solutions leveraging end-to-end encryption of the sensed data are suitable only for decentralized applications.

By leveraging lightweight cryptographic techniques, we propose effective solutions for protecting sensed data, sensor nodes and the privacy of data producers, which are hooked into the sensor hardware and are therefore harder to bypass. We present protocols for privacy-aware reporting of sensed data in both centralized and decentralized IoT applications. Our solution relies only on an offline trusted authority, which greatly reduces the risk of compromising keys. The protection of sensed data and privacy locally on the sensing devices further reduces the risk of collusion and Sybil attacks.
IoT applications vary significantly in their infrastructure (e.g., cloud vs. edge), sensed data collection mechanisms (e.g., raw data vs. processed information collection), data processing requirements (e.g., processing at the data source vs. processing at the server), data acceptance criteria, and services. A sensor-centric security solution to ensure sensed data trustworthiness and sensors' usage privacy depends on whether the processing of the sensed data takes place on the sensors or on the server side. Therefore, with respect to data processing requirements, we categorize IoT applications into two classes and propose a scheme tailored for each:

The first class of applications is attributed to the collection of raw data from the sensing devices. Processing of the data takes place at a central server. The sensors are (embedded or externally) connected to a host device that reads the sensors and relays the sensed data to the server. Participatory sensing applications are a common example of centralized applications.

We propose trusted sensing and privacy-aware reporting for the centralized applications to ensure (i) trustworthiness of sensed data (i.e., data with integrity, authenticity and freshness guarantees) and (ii) usage privacy (i.e., anonymity of the sensing devices and unlinkability of multiple submissions from a device). The scheme works in two stages. First, the trustworthiness of sensed data is ensured by trusted sensors. Each trusted sensor extracts its unique fingerprint from the sensor hardware using an on-chip physically unclonable function (PUF) and attests to the integrity and authenticity of each sensed reading by signing it using an identity-based signature scheme. The signature scheme uses the sensor-bound, unique fingerprint as the signing key.
Second, to report the signed readings from the trusted sensors in a privacy-preserving manner, all privacy-leaking information (e.g., the signature with a sensor-bound unique key) is anonymized using a non-interactive witness-indistinguishable proof system (P_NIWI) Groth and Sahai [2012]. Due to resource constraints on the sensors, we offload the privacy protection mechanism to the host processor. Since the host OS is assumed to be untrusted, we leverage a virtualization approach Chen et al. [2008], Brakensiek et al. [2008] where the user's software environment runs as a guest virtual machine. The root virtual machine, inaccessible to the user, has exclusive access to the trusted sensors and runs the privacy protection mechanism on the sensors' output.

The second class of applications leverages the resources on the host devices for processing the sensed data locally on the sensor nodes. Semantic information extracted from the data is delivered to the server. Visual monitoring applications are prominent examples of this class of applications.

With the trusted sensing and privacy-aware reporting approach, once sensed data is signed at the sensor, any processing of the data at host devices invalidates the security guarantees, which renders the trusted sensing approach unsuitable for the given scenario. Instead, a holistic security solution encompassing the sensor and the host device, called secure node, is presented. The secure node approach addresses all layers of the sensor node (sensor and host) stack, including the applications, middleware, OS, and hardware.

We present the details of the trusted sensing and privacy-aware reporting scheme and the secure node scheme for centralized and decentralized IoT applications in Sections 4 and 5, respectively.
This section presents the trusted sensing and privacy-aware reporting approach for centralized applications. To illustrate our approach, we consider the participatory sensing (PS) scenario of Fig. 2. Individuals interested in contributing sensed data to a PS application register their mobile devices with the PS server (also known as the application server). During registration, a client software is downloaded and installed on the mobile device. In order to contribute sensed data to the PS application, the client software running on the mobile device triggers a system call to the OS to read out the required sensors and return the readings to the client. The client composes them into a report and relays it to the server. The mobile devices may use WiFi or a cellular Internet service to submit the sensed reports. The PS server collects reports from the contributing mobile devices, archives them for the short or long term, and performs processing on the collected data. Processing includes filtering the high-quality data, extracting information from the collected data and presenting the information in a format required by the end user. This information is provided to the end users as a service.

Figure 2: A high-level infrastructure of participatory sensing (PS) applications

Trusted sensing and privacy-aware reporting aims for two security objectives: (i) trustworthiness of sensed data and (ii) anonymity of sensing devices and unlinkability of multiple submissions from a device.

The trustworthiness of sensed data is ensured by trusted sensors that comprise two key components: (i) a PUF framework extracts the sensor fingerprint using an on-chip PUF and binds a unique key to the sensor hardware using the fingerprint, and (ii) sensed data attestation uses an identity-based signature scheme to sign every sensor reading using the sensor-bound key (depicted in Fig. 3). The scheme uses a trusted authority who securely binds a unique key to each sensor's hardware.

To report the signed readings from trusted sensors in a privacy-preserving manner, we adopt a non-interactive witness-indistinguishable proof system (P_NIWI) Groth and Sahai [2012]. Due to resource constraints on the sensors, we offload the privacy protection mechanism to the host processor on the user device. Since the host OS is assumed to be untrusted, we leverage a virtualization approach Chen et al. [2008], Brakensiek et al. [2008] where the user's software environment runs as a guest virtual machine. The root virtual machine is inaccessible to the user.

To ensure anonymity and unlinkability of multiple submissions by a user device, uniquely identifying information in a trusted sensor's output, such as the sensor's signature on the reading under the unique sensor-bound key, cannot be revealed to the server, since it can uniquely identify the sensor and thereby the user device and the user. Instead, the mobile device computes and reports a proof of knowledge of the uniquely identifying information. This is done using P_NIWI.

Given a mobile device embedded with trusted sensors, the root virtual machine executes the prover algorithm of the P_NIWI as follows: (i) read the trusted sensors, (ii) commit to the witness (i.e., the uniquely identifying information that we want to anonymize, such as the sensor identity, the signature, and the certificate) and (iii) generate the proofs of knowledge of the sensor identity, the signature, and the certificate. These commitments and proofs are sent to the server along with the sensed readings.
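The commit step of the prover can be pictured with a drastically simplified stand-in: a hash-based commitment that hides the witness from the server while binding the prover to it. Note that this is only an illustration of the hiding/binding idea; the actual scheme uses Groth-Sahai commitments over bilinear groups and never opens the commitment, proving knowledge of the opening instead. All names below are hypothetical.

```python
import hashlib
import secrets

def commit(witness: bytes):
    """Toy hiding/binding commitment: c = H(nonce || witness).
    Stands in for the Groth-Sahai commitments used by the scheme."""
    nonce = secrets.token_bytes(16)
    c = hashlib.sha256(nonce + witness).hexdigest()
    return c, nonce  # c is sent to the server; the nonce stays with the prover

def open_commitment(c: str, nonce: bytes, witness: bytes) -> bool:
    """Check that (nonce, witness) is a valid opening of c."""
    return hashlib.sha256(nonce + witness).hexdigest() == c

# The sensor's signature is uniquely identifying, so only its commitment leaves
# the device; without the nonce, the server cannot link c to the signature.
signature = b"sensor-signature-bytes"
c, nonce = commit(signature)
assert open_commitment(c, nonce, signature)       # binding: valid opening accepted
assert not open_commitment(c, nonce, b"forged")   # binding: other witness rejected
```

In the real protocol the prover keeps the opening secret and instead sends a P_NIWI proof that it knows an opening containing a valid signature-certificate pair.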
The server executes the verifier algorithm of the P_NIWI using the received readings, commitments, and proofs as arguments and verifies that the prover in fact possesses a valid signature-certificate pair for each received reading, thereby verifying the integrity and authenticity of the readings.

The witness indistinguishability of the P_NIWI proof system implies that the commitments and proofs do not reveal (to the server) the witness used to construct them. Anonymity of the prover is given by the number of possible witnesses. Given N mobile devices equipped with trusted sensors and submitting readings to the PS server, the anonymity of each user is given by N. Our scheme aggregates all signatures and certificates in the trusted sensors' output into a single signature and then generates the proof of knowledge of the aggregate signature, as illustrated in Fig. 3. This considerably reduces the communication overhead incurred on the user device.

Figure 3: Trusted sensing and privacy-aware reporting scheme: Trusted sensors provide integrity and authenticity guarantees on sensed data. Sensor readings are aggregated and anonymized on the root virtual machine (VM) of the host device. The modules of the proposed security scheme are marked in green.

Next, we present the trusted sensing and privacy-aware reporting components of the security scheme in Sections 4.1 and 4.2, respectively.
Trusted sensing is accomplished by the trusted sensors in two steps: fingerprint extraction and sensed data attestation. The former refers to the extraction of unique, non-transferable fingerprints from the sensor hardware by a legitimate authority in a secure environment. The latter uses a digital signature scheme to attest the integrity and authenticity of sensor readings. The sensor fingerprint serves as the signing key for the sensed data attestation. The receiver (e.g., the PS server) can verify the integrity and authenticity of each reading by signature verification. The proposed trusted sensor uses a PUF framework to extract the sensor fingerprint and bind it to the sensor hardware. An identity-based signature scheme (PUF-based Cert-IBS) is used for the sensed data attestation, which ensures non-repudiation of the data.
Physically unclonable functions (PUF) are special lightweight circuits that use CMOS manufacturing process variations to generate a fingerprint of the underlying hardware. Typical attributes of a PUF include randomness, uniqueness, physical unclonability, and reliability. A PUF circuit provides a challenge-response mapping that is based on the uncontrollable variations in the physical structure of the integrated circuit (IC) introduced during the manufacturing process. These variations are random and unique for each instance, which makes any PUF-enabled electronic hardware uniquely identifiable (uniqueness). Moreover, the chip manufacturer is not able to control or forge these variations (physical unclonability). Reliability implies that a PUF should reproduce the same challenge-response pairs under a range of environmental and operating conditions. In practice, however, multiple responses from a PUF instance obtained under different environmental conditions (e.g., temperature) or operating conditions (e.g., supply voltage) differ slightly from one another. These variations are referred to as PUF noise or error rate and are measured as the intra-Hamming distance (HD_intra). Uniqueness of a PUF mapping is measured in terms of the inter-Hamming distance (HD_inter), which measures how different the responses of two PUF instances are. Randomness of a PUF response is measured in terms of the Hamming weight (HW) of the response. Ideally, the maximum HD_intra ≈ 0%, the average HD_inter ≈ 50%, and the average HW ≈ 50%.

In order to extract a uniformly distributed, random, and perfectly reproducible fingerprint from the noisy and biased PUF response, helper data algorithms (HDAs) are used. Our scheme requires the flexibility of masking an externally generated cryptographic key with the device fingerprint; therefore we use the HDA by Tuyls and Batina [2006]. The PUF framework comprises two modules, the PUF and the HDA, and works in two phases: key binding and key extraction.

1. Key Binding: W ← Gen(r, k). A one-time protocol carried out by a legitimate authority on the PUF in a secure environment to generate helper data W. A challenge c is applied to the PUF and a response r is obtained. The authority then chooses a random key k ∈ {0, 1}^k and calculates the corresponding helper data as W ← r ⊕ C_k, where C_k is the nearest codeword chosen from the error-correcting code C with 2^k codewords. W is integrity-protected public information.

2. Key Extraction: k ← Rep(r′, W). Performed every time key extraction from the PUF is desired. The PUF is subjected to the same challenge c and a noisy response r′ is obtained. The codeword is then calculated as C_k′ ← r′ ⊕ W, and k ← Decode(C_k′). If r′ corresponds to the same challenge c applied to the same PUF, decoding yields k provided the Hamming distance(C_k, C_k′) ≤ t, where t is the error-correction capacity of C; otherwise an invalid codeword is obtained.

This PUF framework offers the following key advantages: (i) it binds a unique key to a PUF-enabled hardware, (ii) it provides secure storage of the key, since the key is derived from device properties during start-up, and (iii) it offers more cost-effective secure key storage than a secure memory alternative.

This section explains our Identity-Based Signature scheme (PUF-based Cert-IBS), which is based on the framework of Bellare et al.
[2009] to construct a certificate-based identity-based signature (Cert-IBS) scheme from a standard signature (SS) scheme. PUF-based Cert-IBS ensures integrity and authenticity of sensed data in our trusted sensing and privacy-aware reporting and secure node schemes. It uses the PUF framework of Section 4.1.1 to bind the security key to the sensor fingerprint extracted using the PUF. A typical SS comprises three algorithms: key generation (K), signing (Sign), and verification (Ver). PUF-based Cert-IBS uses a key generation authority. To set up PUF-based Cert-IBS, the authority generates a master key pair (msk, mpk) using K. We assign an identity I and a PUF instance PUF to each sensor. The identity can be any unique physical identifier of the sensor, such as a serial number, an EPC, or a unique bit string written to one-time programmable memory of the sensor. We denote a sensor with identity I and PUF by SEN(I, PUF).

1. Setup. The trusted authority runs K of SS to generate the master key pair: (mpk, msk) ← K(1^k).

2. Enrollment. During the enrollment phase, the authority generates a unique signing key pair (sk, pk) using the key generation algorithm K of SS and binds sk to the on-chip PUF using the key-binding algorithm of the PUF framework, i.e., W_sk ← Gen(r, sk), where r is the PUF response to a challenge c selected by the authority and W_sk is the helper data corresponding to sk. Further, the authority issues a certificate on the public half of the signing key, given by cert ← Sign_msk(pk, I). W_sk and cert are stored in the sensor's non-volatile memory.

3. Sensed Data Attestation. Sensed data attestation is performed every time the sensor SEN(I, PUF) outputs a new reading. The private key required for signing is reconstructed at power-up using the key-extraction phase of the PUF framework, i.e., sk ← Rep(r′, W_sk). The PUF-based Cert-IBS signature of SEN(I, PUF) on a sensor reading M is given by (M, I, pk, σ, cert), where σ ← Sign_sk(M). PUF-based Cert-IBS verification is successful if Ver_pk(M, σ) = 1 and Ver_mpk((I, pk), cert) = 1. Successful Cert-IBS verification ensures that reading M is signed by SEN(I, PUF) with its platform-bound private key, assigned and bound to SEN(I, PUF) by the legitimate authority.

Given that SS is a uf-cma secure standard signature scheme, Theorem 3.5 of Bellare et al. [2009] proves that the corresponding PUF-based Cert-IBS as per the construction of Section 4.1.2 is a uf-cma secure IBS scheme.
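The key-binding and key-extraction phases used above (Gen at enrollment, Rep at power-up) can be sketched in a few lines of Python. This is a minimal, illustrative sketch: a simple repetition code stands in for the error-correcting code C, whereas a practical design would use a stronger code such as BCH.

```python
import secrets

R = 5  # repetition factor: each key bit is encoded into R response bits

def encode(key_bits):
    """Map a key to its nearest codeword C_k (repetition code)."""
    return [b for b in key_bits for _ in range(R)]

def decode(codeword):
    """Majority-vote decoding; corrects up to t = (R-1)//2 errors per block."""
    return [int(sum(codeword[i * R:(i + 1) * R]) > R // 2)
            for i in range(len(codeword) // R)]

def gen(response, key_bits):
    """Key binding: W <- r XOR C_k (run once by the authority)."""
    return [r ^ c for r, c in zip(response, encode(key_bits))]

def rep(noisy_response, W):
    """Key extraction: Decode(r' XOR W) recovers k if r' is close to r."""
    return decode([r ^ w for r, w in zip(noisy_response, W)])

# Example: bind a 16-bit key to an 80-bit PUF response.
r = [secrets.randbelow(2) for _ in range(16 * R)]   # enrollment response
k = [secrets.randbelow(2) for _ in range(16)]       # externally generated key
W = gen(r, k)                                       # public helper data

r_noisy = list(r)
for i in range(0, len(r_noisy), R):                 # flip one bit per block
    r_noisy[i] ^= 1
assert rep(r_noisy, W) == k                         # key recovered despite noise
```

The helper data W leaks nothing about k only to the extent that r is uniformly random; since real PUF responses are biased and noisy, the framework measures HW, HD_intra, and HD_inter before fixing the code parameters.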
Privacy-aware reporting of sensed data entails anonymity of user devices and unlinkability of multiple submissions from a user device. Given a user device incorporating the trusted sensors, each element of the tuple (I, pk, σ, cert) in a trusted sensor's output uniquely identifies the sensor and therefore cannot be revealed to the data center. For each trusted sensor output (M, I, pk, σ, cert), the user device computes a proof of knowledge of (I, pk, σ, cert) using the non-interactive witness-indistinguishable proof system (P_NIWI) by Groth and Sahai [2012]. The mobile device (prover) then sends the proof instead of (I, pk, σ, cert), along with the sensor reading M, to the server (verifier), as depicted in Fig. 3. The server verifies the proof. Successful verification ensures that the mobile device knows a witness (I, pk, σ, cert) such that the PUF-based Cert-IBS verification equations hold true for the received data M, i.e., Ver_pk(M, σ) = Ver_mpk((I, pk), cert) = 1.

Given N trusted sensors submitting sensed data to a server, witness indistinguishability of the proof implies that the data center cannot distinguish which witness {(I_i, σ_i, cert_i)}_{i=1}^{N} (i.e., trusted sensor) was used to construct the proof. Therefore, every trusted sensor is N-anonymous with respect to the server. Unlinkability of multiple submissions by the same sensor follows from witness indistinguishability.

For privacy-aware reporting of sensed data, our scheme uses pairing-based cryptography; therefore we sketch some basics of pairings. Let G_1, G_2, and G_T be cyclic groups of the same prime order, and let g_1 and g_2 be generators of G_1 and G_2, respectively. A pairing is a map e : G_1 × G_2 → G_T that is (i) bilinear, i.e., for all u ∈ G_1, v ∈ G_2 and a, b ∈ Z, e(u^a, v^b) = e(u, v)^{ab}, (ii) non-degenerate, i.e., e(g_1, g_2) generates G_T, and (iii) efficiently computable. The setting where G_1 = G_2 = G and g_1 = g_2 = g is called a symmetric pairing, whereas if G_1 ≠ G_2 and g_1 ≠ g_2, the pairing is called asymmetric. For simplicity, we explain our scheme in the symmetric setting. For a practical implementation, an asymmetric setting is recommended (cp. Section 6.3).

Non-interactive witness-indistinguishable proof system (P_NIWI). A proof system allows a prover who possesses some witness ω to convince a verifier that a certain statement χ ∈ L is true, where L is some language and ω is a witness that attests to this fact. In witness-indistinguishable proof systems, the interaction between the prover and the verifier does not reveal information about the witness, even if the verifier behaves maliciously. Furthermore, it is infeasible for an adversary to decide which of the possible witnesses is used by the prover.
In a non-interactive proof system, the prover simply sends the verifier a single message, after which the latter verifies correctness of the proof without any further interaction with the prover.

Groth and Sahai introduced a non-interactive witness-indistinguishable proof system (P_NIWI) for languages involving the satisfiability of equations over bilinear groups, in the common reference string model Groth and Sahai [2012]. The main idea underlying P_NIWI is as follows: given groups A_1, A_2, A_T with a bilinear map, P_NIWI maps the elements of A_1, A_2, A_T into B_1, B_2, B_T, also equipped with a bilinear map, by using a commitment scheme. The latter groups are larger, thereby allowing the elements of A_1, A_2, A_T to be hidden.

Given the equation(s) that we intend to prove, we replace the variables (the witness) in the equation(s) with commitments to those variables. Since the commitments are hiding, the equations will no longer be valid. However, we can extract the additional terms introduced by the randomness of the commitments and provide these terms in the proof to the verifier, who can then verify the validity of the equations. Does providing these terms destroy witness indistinguishability? Since there are multiple additional terms introduced by substituting the commitments, the algebraic environment allows us to randomize the terms such that their distribution is uniform over all possible terms satisfying the equations.

By definition, P_NIWI is a tuple of four probabilistic polynomial-time algorithms (K_NI, P_NI, V_NI, X_NI), i.e., key generator, prover, verifier, and extractor, respectively. The key generator, K_NI, takes the bilinear group description A_1, A_2, A_T as input and outputs a common reference string crs and an extraction key xk. The crs comprises the target group description B_1, B_2, B_T and rules to compute commitments.
Given a set of equations (that the prover wants to prove), the prover, P_NI, takes crs and a witness ω as input and outputs a proof π for each equation. The verifier, V_NI, given crs, the set of equations, and π, outputs 1 if the proof is valid and 0 otherwise. Finally, the extractor, X_NI, given a valid proof π, may extract ω using the extraction key xk.

The privacy-aware trusted sensing scheme adopts P_NIWI for privacy-aware reporting of sensed data from the trusted sensors. During the setup, the TA runs the key generator algorithm K_NI, which takes the group description (G, G_T, g, e, p) as input and outputs the crs, comprising eight elements of G, and an extraction key xk. The crs is published for the participating mobile devices and the application servers, whereas xk is kept secret by the TA, since extraction of witnesses is not required in the proposed privacy-aware trusted sensing scheme. The scheme can, however, be extended with dispute resolution and revocation mechanisms using the extractor algorithm and xk. A mobile device runs P_NI every time it has to submit sensed data from the trusted sensors to an application server.

Without a privacy-protection mechanism, a mobile device submits the trusted sensor reading (M, I, pk, σ, cert) as is to the server, which verifies the signature and the certificate using the PUF-based Cert-IBS verification equations, Ver_mpk((I, pk), cert) ?= 1 and Ver_pk(M, σ) ?= 1. In this case, the server requires (I, pk, σ, cert) as input to the PUF-based Cert-IBS verification algorithm. Each element of this tuple uniquely identifies the trusted sensor and the mobile device. With P_NIWI, the root virtual machine running on the mobile device commits to each element of the witness (I, pk, σ, cert) using the crs: d(I), c(pk), c(σ), and c(cert), where c(.) denotes a commitment to an element of group G and d(.) denotes a commitment to an element of Z_p. Here pk, σ, and cert are each elements of G. Although I is only assumed to be a unique, random value, one can use I mod p (∈ Z_p) instead. Each c(.) and d(.) consists of 3 elements of G. The root VM further computes two proofs π_1 and π_2 by replacing the witness with the commitments in the PUF-based Cert-IBS verification equations. Since we use the symmetric version of the BLS signature scheme Boneh et al. [2004] as the SS in PUF-based Cert-IBS, the PUF-based Cert-IBS verification equations, denoted eq_1 and eq_2, are given by Eqs. 1a and 1b. Each of π_1 and π_2 consists of 9 elements of G. The proofs and commitments are then sent to the server along with the sensor reading M. The PS server runs the verification algorithm V_NI to verify whether or not the proofs π_1 and π_2 and the commitments satisfy the following equations:

e(h_M, pk) = e(σ, g)    (1a)
e(h_c, mpk) = e(cert, g)    (1b)

where h_c denotes H(I, pk) and h_M denotes H(M).

In PS applications, a sensed data report typically comprises multiple sensors' readings. For a report comprising Q sensors' readings, the above process is performed Q times.
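The resulting proof sizes can be cross-checked with a short count. The element counts per commitment and per proof (3 and 9, respectively) are an assumption about the DLIN instantiation of Groth-Sahai; the count reproduces the (6Q + 12) aggregated cost reported later in this section:

```python
COMMIT = 3  # group elements per Groth-Sahai commitment (DLIN, assumed)
PROOF = 9   # group elements per pairing-product proof (DLIN, assumed)

def cost_per_report(Q: int) -> int:
    """Q readings, each proved separately: 4 commitments
    (d(I), c(pk), c(sigma), c(cert)) and 2 proofs (Eqs. 1a, 1b) per reading."""
    return Q * (4 * COMMIT + 2 * PROOF)

def cost_aggregated(Q: int) -> int:
    """One aggregate signature: 2Q+1 commitments ({I_i, pk_i} and sigma_bar)
    and a single proof for the aggregate verification equation."""
    return (2 * Q + 1) * COMMIT + PROOF

for Q in (1, 10, 100):
    print(Q, cost_per_report(Q), cost_aggregated(Q))
# cost_aggregated(Q) evaluates to 6*Q + 12, matching the figure in the text
```

For Q = 10 readings, this gives 300 group elements without aggregation versus 72 with aggregation, which is the communication saving the scheme targets.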
The root virtual machine then provides the tuple {M_i, d(I_i), c(pk_i), c(σ_i), c(cert_i), π_{1,i}, π_{2,i}}_{i=1}^{Q} to the application client running in a guest virtual machine on the user device, which sends it to the PS server, which in turn runs the verification algorithm V_NI for each sensor reading. For mathematical details and security proofs of the P_NIWI construction, the reader is referred to Groth and Sahai [2012]. Theorem 17 in Groth and Sahai [2012] proves that P_NIWI following the construction of Section 4.2.2 has perfect completeness, perfect soundness, and composable witness indistinguishability for satisfiability of Eqs. 1a and 1b in a bilinear group G where the DLIN problem is hard. Witness indistinguishability implies that the proofs and the commitments do not reveal which values of the witness (I, pk, σ, cert) were used to generate them. Anonymity of the prover P_NI with respect to V_NI is given by the number of possible values a witness can take. Given the PS scenario, the anonymity of each trusted sensor with respect to the application server is given by the total number of trusted sensors contributing sensed data to the server.

Assuming the DLIN assumption holds in G, the P_NIWI for each sensor reading costs 30 elements of group G, i.e., four commitments d(I), c(pk), c(σ), c(cert) consisting of 3 group elements each and two proofs π_1 and π_2 of 9 group elements each. A report comprising Q sensors' readings therefore incurs a communication overhead of 30Q elements of G on the mobile device.

The aggregation property of BLS signatures allows an aggregating party to combine multiple, say m, signatures into a single signature as σ̄ ← Π_{i=1}^{m} σ_i, where σ̄ ∈ G, thereby reducing the total signature size to 1/m. For aggregate verification, given σ̄, the original messages M_i, and public keys pk_i, compute h_{M_i} ← H(M_i) and accept if e(σ̄, g) = Π_{i=1}^{m} e(h_{M_i}, pk_i).
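The aggregate-verification identity can be checked with a toy model in which each group element is represented by its discrete logarithm, so the pairing e(g^a, g^b) = e(g, g)^{ab} reduces to multiplication of exponents modulo the group order. This is purely illustrative and insecure (knowing discrete logs defeats the scheme); a real implementation uses a pairing library over an elliptic curve.

```python
import hashlib

q = 7919  # toy prime group order; group elements are written as exponents mod q

def H(msg: bytes) -> int:
    """Hash-to-group, here hash-to-exponent: h_M = H(M)."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % q

def sign(sk: int, msg: bytes) -> int:
    """BLS signature sigma = h_M^sk, i.e., exponent h_M * sk mod q."""
    return (H(msg) * sk) % q

def pair(a: int, b: int) -> int:
    """Toy bilinear map: e(g^a, g^b) = e(g, g)^(a*b)."""
    return (a * b) % q

def aggregate(sigs):
    """sigma_bar = product of sigmas, i.e., sum of exponents mod q."""
    return sum(sigs) % q

def verify_aggregate(agg, msgs, pks):
    """Accept iff e(sigma_bar, g) = prod_i e(h_Mi, pk_i)."""
    lhs = pair(agg, 1)  # g itself has exponent 1
    rhs = sum(pair(H(m), pk) for m, pk in zip(msgs, pks)) % q
    return lhs == rhs

# Three sensors, each with its own platform-bound key (pk = g^sk).
keys = [(x, x) for x in (17, 101, 4242)]  # (sk, pk) as exponents
msgs = [b"reading-1", b"reading-2", b"reading-3"]
sigs = [sign(sk, m) for (sk, _), m in zip(keys, msgs)]
agg = aggregate(sigs)
assert verify_aggregate(agg, msgs, [pk for _, pk in keys])
```

Note that the product of pairings on the right-hand side becomes a sum of exponents in the toy target group, which is exactly why aggregating m signatures into one preserves verifiability.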
In our scheme, aggregation is done by the root virtual machine on the mobile device as follows: in a PUF-based trusted sensor output, σ_i, i.e., Sign_{sk_i}(M_i), and cert_i, i.e., Sign_msk(I_i, pk_i), are both BLS signatures and can be aggregated. Furthermore, if an application requires every mobile device to submit multiple, say Q, sensor readings, the aggregation is done as σ̄ = Π_{i=1}^{Q} σ_i · cert_i.

A major reduction in communication overhead is achieved since P_NIWI is applied on the aggregate of Q readings instead of individual ones. The simultaneous satisfiability of PUF-based Cert-IBS aggregate signature verification is given by Eq. 2. The root virtual machine sets the witness to ({I_i, pk_i}_{i=1}^{Q}, σ̄) and the equation we want to verify, denoted eq, to Eq. 2. The prover, using P_NI, commits to each element of the witness and generates a proof π by plugging the commitments into eq. Successful verification using V_NI at the server ensures that the prover (mobile device) possesses Q PUF-based Cert-IBS signatures on Q distinct readings {M_i}_{i=1}^{Q} such that:

e(σ̄, g) = Π_{i=1}^{Q} e(h_{M_i}, pk_i) · e(h_{c_i}, mpk)    (2)

The privacy-aware trusted sensing scheme is summarized in Table 2 and runs in three phases: setup, enrollment, and trusted sensing and privacy-aware reporting. The TA sets up the scheme by generating its master key pair. The bilinear group description and crs are also generated and published for the PS entities during the setup.

Setup: TA
    TA: H, Σ ← G(1^k), where Σ = (G, G_T, g, e, p) and H is a collision-resistant hash function
    TA: (mpk, msk) ← K(1^k)
    TA: (crs, xk) ← K_NI(Σ)
Enrollment: SEN(I, PUF) ↔ TA(mpk, msk)
    TA: W ← Gen(r, sk)
    TA: cert ← (pk, Sign_msk(pk, I))
Trusted Sensing and Privacy-Aware Reporting: SEN(I, PUF, W, cert) ↔ MOB(Σ, H, crs) ↔ PS SERVER(Σ, H, crs)
    SEN(I, PUF, W, cert): (M, I, σ, cert) ← SDA(M)
    MOB (Root VM): ({M_i, I_i, pk_i}_{i=1}^{Q}, σ̄) ← AGG({M_i, I_i, σ_i, cert_i}_{i=1}^{Q}), for Q sensor readings
    MOB (Root VM): ({d(I_i), c(pk_i)}_{i=1}^{Q}, c(σ̄), π) ← P_NI(Σ, crs, ({I_i, pk_i}_{i=1}^{Q}, σ̄), eq)
    PS SERVER: 1 ?= V_NI(Σ, crs, {d(I_i), c(pk_i), H(M_i)}_{i=1}^{Q}, c(σ̄), π)
Notations: TA = Trusted Authority, MOB = Mobile device, SDA = Sensed data attestation, AGG = Signature aggregation

Table 2: The privacy-aware trusted sensing scheme is executed in three phases: setup, enrollment, and trusted sensing and privacy-aware reporting. The TA sets up the scheme by generating its master key pair. The bilinear group description and crs are also generated and published for the PS entities. During enrollment, each sensor is enrolled with the TA using PUF-based Cert-IBS enrollment. During the trusted sensing and privacy-aware reporting phase, the trusted sensors output readings with integrity and authenticity guarantees, which are anonymized by the root VM on the host mobile device and sent to the PS server. The PS server verifies the integrity and authenticity of the readings in a privacy-preserving manner using the P_NIWI verification algorithm V_NI.

Enrollment of the privacy-aware trusted sensing scheme is in fact the enrollment of the PUF-based Cert-IBS, performed only once in a secure and trusted environment. Trusted sensing and privacy-aware reporting is performed every time a mobile device contributes sensed data to a PS application: the trusted sensors output readings with integrity and authenticity guarantees, which are anonymized by the root VM on the host mobile device and sent to the PS server. The PS server verifies the integrity and authenticity of the readings in a privacy-preserving manner using the P_NIWI verification algorithm, V_NI. P_NIWI of Q aggregated readings for satisfiability of eq costs (6Q + 12) elements of G, compared to 30Q elements of G without aggregation. Anonymity of the mobile device with respect to the PS server is given by the total number of mobile devices reporting the PUF-based trusted sensors' readings to the PS server.

This section presents the secure node approach for decentralized IoT applications.
Visual sensor networks (VSNs) are becoming increasingly popular in IoT applications that range from surveillance of critical public spaces for law and order maintenance and public safety to private-space monitoring such as smart homes, assisted/enhanced living, child monitoring, and home security Winkler and Rinner [2014]. Continuous transmission of visual data requires high bandwidth and memory, which is often unfeasible. Therefore, visual monitoring applications typically require processing of visual data locally on camera nodes. A major limitation of the trusted sensing and privacy-aware reporting approach of Section 4 is that once data is signed within the sensor, any legitimate modification (processing, compression, etc.) of the data at the host processor invalidates the security guarantees. With the secure node approach, we overcome these limitations by adopting a holistic security solution for the node that addresses all layers of the camera stack, including the applications, middleware, OS, and hardware.

To illustrate our approach, we consider the visual monitoring for assisted living scenario of Fig. 4, where one or more cameras monitor the space for events of interest such as fall detection or no movement for a long duration. To limit the amount of data transmitted by the camera device, archiving is triggered upon event detection. The event can be triggered internally (e.g., by on-board analytics) or externally (e.g., by auxiliary sensors or a request from the caretaker). We assume that the integrity and authenticity of an external source are verified before triggering an event. Upon event detection, an alert message is sent to a caretaker and the video data capturing the event is uploaded to a storage server. When the upload has been completed, a push notification is sent by the server to the caretaker, who then downloads the data, analyzes it, and formulates a response.

Figure 4: A high-level infrastructure of private-space monitoring applications depicting security and privacy requirements

In order to keep the cost of the camera low, we do not assume availability of permanent storage of videos on the camera node. A public cloud storage server is leveraged for short- or long-term video archiving. To limit the amount of data transmitted by the camera device, archiving is triggered upon event detection. The event can be triggered internally (e.g., by on-board analytics) or externally (e.g., by auxiliary sensors or a request from the end user). We assume that the integrity and authenticity of an external source are verified before triggering an event.

Data security, node security, and the personal privacy of the monitored individuals are integral parts of the sensing device. Data security includes integrity, authenticity, confidentiality, and freshness of the visual data and the metadata. Data security is ensured on the sensing device before the data is delivered to the server. Since visual data contains the identities and behaviors of observed individuals, data confidentiality and access authorization are essential security requirements for personal privacy protection in visual monitoring applications. These security guarantees are valid throughout the entire lifetime of the data.

Confidentiality is ensured by encrypting each video frame using AES128 encryption. Integrity and authenticity are ensured by signing the hash chain of encrypted frames using the PUF-based Cert-IBS scheme of Section 4.1.2. The PUF-based Cert-IBS and AES128 algorithms use platform-bound security keys. The PUF framework of Section 4.1.1 binds the signing and encryption keys to the camera hardware using an on-chip PUF that serves as secure key storage. On event detection, the encrypted-hashed-signed footage is uploaded to a public storage server at the edge or cloud tier, and an alert message is sent to the end user, who can then download the archived footage from the server on demand. Integrity, authenticity, and freshness of the data are ensured by verifying PUF-based Cert-IBS signatures. Only the authorized end user (i.e., having access to the decryption key) can decrypt the frames. Since data security is implemented at the application level, in order to ensure effective security guarantees, the underlying software and hardware stack of the camera node needs to be protected as well. Node security requirements include integrity, authenticity, and unclonability of the camera firmware, and resistance against hardware tampering and side-channel attacks.
In order to bind the cryptographic keys to the camera platform and limit data access to only legitimate caretakers, the scheme requires two steps, namely enrollment and key exchange, to be performed before the camera can be deployed for monitoring. The enrollment, key exchange, and monitoring phases of the scheme are depicted in Fig. 5. The secure node approach uses a trusted authority (TA). Enrollment is performed by the TA in a secure and trusted environment. During this step, the TA extracts a unique fingerprint of the camera hardware using the PUF framework of Section 4.1.1 and binds the signing and encryption keys to the hardware. During the key exchange, the caretaker securely transfers the signature verification and decryption keys from the camera device to her monitoring device. Afterwards, the camera is deployed for monitoring. To set up our scheme, the TA generates a master key pair (mpk, msk) ← K(1^k). We assign each camera a unique identity I and a PUF instance. We denote a camera with identity I and on-chip PUF instance PUF as CAM(I, PUF) and the trusted authority with master key pair as TA(msk, mpk).

1. Enrollment.
Figure 5: The enrollment, key exchange, and monitoring phases of the secure node scheme for visual monitoring for assisted living. The enrollment is performed once in a secure and trusted environment, whereby the platform-bound unique cryptographic keys are created. During key exchange, the end user shares the verification keys with the caretaker's monitoring device using a local interface. Thereafter, the camera is deployed for monitoring.
During enrollment, TA(msk, mpk) binds a signing key pair (sk, pk) and an AES encryption key (k_E) to the camera node CAM(I, PUF) using the camera fingerprint. The CAM presents the TA with its identity I as a request for enrollment. The TA picks two random challenges (c_1, c_2) and feeds them to the PUF on the camera. The responses from the PUF, (r_1, r_2), are returned to the TA, who then binds a signing and an encryption key to the CAM as follows: First, the TA generates a signing key pair (sk, pk) using the key generation algorithm of PUF-based Cert-IBS. Using the key-binding algorithm of the PUF framework, it binds the private half of the key pair to the PUF using r_1, i.e., W_1 ← Gen(r_1, sk). Furthermore, the TA issues a certificate consisting of its signature on the CAM's identity and the public half of the key pair, i.e., cert ← Sign_msk(I, pk). Second, the TA generates a unique, random encryption key k_E and binds it to the PUF on the CAM using r_2, i.e., W_2 ← Gen(r_2, k_E). The tuple (W_1, W_2, cert) is stored in non-volatile memory on the camera.

2. Key Exchange.
Key exchange is performed by the caretaker before deploying the camera for monitoring. During this step, the caretaker transfers the signature verification key pk and the decryption key k_E from the camera device to her monitoring device via a local interface such as NFC. Since the transfer is done in a private space using a local connection, it is assumed that k_E is not leaked to a third party. Securing the keys on mobile devices with a vulnerable software stack is out of the scope of this work. However, well-established techniques such as virtualization Chen et al. [2008] (which isolates applications requiring a trusted infrastructure) and secure vaults can be leveraged for this purpose.

3. Monitoring.
Once the camera is deployed for monitoring, on every power-up the signing and encryption keys are regenerated from the noisy PUF responses following the key-extraction phase of the PUF framework, i.e., sk ← Rep(r′_1, W_1) and k_E ← Rep(r′_2, W_2). In case of an event of interest, the video footage capturing the event is transferred to the caretaker. Let N be the total number of frames comprising the footage. The value of N can be either fixed or variable, depending on the type of event. Upon the occurrence of an event, each video frame is encrypted using the AES128 algorithm to ensure confidentiality, i.e., C_i ← Enc_{k_E}(frame[i]) | i = 1...N. Non-repudiation of the data is ensured by a MAC-then-Sign technique. First, each encrypted frame is hashed using the HMAC algorithm to ensure integrity, h_i ← HMAC(C_i) | i = 1...N. This is followed by signing the entire hash chain of all encrypted frames using the PUF-based Cert-IBS scheme, given by σ ← Sign_sk(h_1 ‖ h_2 ‖ · · · ‖ h_N ‖ τ), where τ is the timestamp given by SHA(I ‖ event_count). Signing the entire hash chain together preserves the frame order. The timestamp is included in the signature to ensure freshness of the data and thwart replay attacks. The camera then uploads the encrypted-then-MACed-then-signed footage {C_1, C_2, ..., C_N, τ, σ} to the storage server. An alert message notifies the caretaker about the event and the completion of the upload. The caretaker can then download the footage on demand and check its integrity, authenticity, and freshness by verifying the Cert-IBS signature, i.e., 1 ?= Ver_mpk(cert, (I, pk)) and 1 ?= Ver_pk(σ, (h_1 ‖ h_2 ‖ · · · ‖ h_N ‖ τ)). Upon successful verification, the caretaker uses the decryption key to decrypt the frames to obtain the footage in raw format, frame[i] ← Dec_{k_E}(C_i) | i = 1...N.
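The monitoring-phase data path (encrypt each frame, MAC it, hash-chain, sign with a fresh τ) can be sketched as follows. To keep the example self-contained, AES128 is replaced by a SHA-256-based stream cipher and the Cert-IBS signature by an HMAC under sk; both are stand-ins, the camera identity `CAM-7` is illustrative, and keying the per-frame MACs with k_E is an assumption (the text does not fix the HMAC key). In the real scheme, the caretaker verifies σ with the public key pk rather than recomputing it with sk.

```python
import hashlib
import hmac

def keystream(k: bytes, nonce: bytes, n: int) -> bytes:
    """SHA-256 in counter mode, standing in for the AES128 cipher."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(k + nonce + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def enc(k_E: bytes, nonce: bytes, frame: bytes) -> bytes:
    """C_i <- Enc_kE(frame[i]); XOR stream cipher, so dec == enc."""
    return bytes(a ^ b for a, b in zip(frame, keystream(k_E, nonce, len(frame))))

dec = enc  # XOR with the same keystream inverts the encryption

def protect_footage(k_E, sk, identity, event_count, frames):
    """Encrypt each frame, HMAC it, then sign the whole hash chain with tau."""
    cts = [enc(k_E, i.to_bytes(8, "big"), f) for i, f in enumerate(frames)]
    hs = [hmac.new(k_E, c, hashlib.sha256).digest() for c in cts]      # h_i
    tau = hashlib.sha1(identity + event_count.to_bytes(4, "big")).digest()
    sigma = hmac.new(sk, b"".join(hs) + tau, hashlib.sha256).digest()  # Sign_sk stand-in
    return cts, tau, sigma

def verify_and_decrypt(k_E, sk, identity, event_count, cts, tau, sigma):
    """Caretaker side: check signature and freshness, then decrypt."""
    hs = [hmac.new(k_E, c, hashlib.sha256).digest() for c in cts]
    expect_tau = hashlib.sha1(identity + event_count.to_bytes(4, "big")).digest()
    ok = hmac.compare_digest(sigma, hmac.new(sk, b"".join(hs) + tau,
                                             hashlib.sha256).digest())
    ok = ok and hmac.compare_digest(tau, expect_tau)  # freshness check
    frames = [dec(k_E, i.to_bytes(8, "big"), c) for i, c in enumerate(cts)]
    return ok, frames

k_E, sk = b"\x00" * 16, b"\x01" * 32
frames = [b"frame-0", b"frame-1", b"frame-2"]
cts, tau, sigma = protect_footage(k_E, sk, b"CAM-7", 1, frames)
ok, recovered = verify_and_decrypt(k_E, sk, b"CAM-7", 1, cts, tau, sigma)
assert ok and recovered == frames
```

Signing the concatenated hash chain rather than each frame individually is what preserves frame order: reordering or dropping any C_i changes the chain and invalidates σ.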
The key idea underlying the secure camera architecture is to leverage an on-chip PUF to extract the node's fingerprint from the hardware, which serves as the basis for on-board data, node, and personal privacy protection. Security is rooted in the system hardware, making it an intrinsic element of the device and therefore harder to bypass. Video data processing and protection are done inside the SoC, and the data leaves the chip with integrity, authenticity, confidentiality, access authorization, and freshness guarantees.

The choice of the processing platform is a critical decision in every vision system design. CPUs perform operations in sequence, whereas FPGAs are massively parallel in nature. Typically, an FPGA performs vision processing orders of magnitude faster than a CPU. However, an FPGA consumes more power and has a higher programming complexity compared to a CPU. An architecture featuring both an FPGA and a CPU presents the best of both worlds and often provides a competitive advantage in terms of performance, cost, and power consumption Treece. The secure camera node leverages a system-on-chip (SoC). This offers two advantages: first, the SoC provides a monolithic architecture that allows a security solution to be tightly integrated with the system logic; second, the SoC comprises an FPGA and a processor part, which provides the flexibility to compose a security solution addressing all layers of the node stack.

Two categories of operating system (OS) are used for embedded devices: real-time OS (RTOS) and general-purpose OS. An RTOS provides scheduling guarantees to ensure deterministic behaviour and timely response to events and interrupts. However, it is inefficient at handling multiple tasks in parallel and lacks board (hardware) support. Embedded Linux and Android dominate the world of general-purpose OS for embedded systems. Android has
Android has been widely successful as a mobile OS due to its rich support for multimedia, graphics, user interfaces, and networking. Drawbacks of Android lie in its large memory footprint and extensive CPU resource consumption. Embedded Linux
PREPRINT - AUGUST 28, 2018

Figure 6: Hardware platform, block diagram, and hardware/software stack of the secure camera node. (a) Zynq7010 SoC and OV5642 image sensor based smart camera platform used for the implementation of the trusted image sensor and secure camera node approaches. (b) Block diagram of the SoC-based secure camera node, depicting the core components of the camera hardware (gray) and the software tasks (white) performed by these components. (c) Hardware (dark gray) and software (light gray) stack of the Zynq SoC-based camera prototype. The hardware comprises a 5MP OV5642 image sensor and a Zynq7010 SoC with an FPGA and an ARM Cortex A9 core; the software stack comprises the embedded Linux kernel, system and user libraries, and the application framework.
Figure 7: The application framework of the secure camera node comprises sensing, processing, security, and communication tasks. The sensing tasks read image data from the sensor and perform format adaptations. The processing tasks include the application logic (e.g., an event trigger based on motion detection). In case of an event detection, data is forwarded to the security tasks, where frames are encrypted-MACed-signed. Protected frames are forwarded to the communication tasks for upload.

shines when it comes to operating efficiency in terms of memory footprint, power, and computing performance. The secure camera node uses the embedded Linux OS.

The prototype shown in Fig. 6(a) realizes the secure camera node architecture. The block diagram of the node, showing the core modules of the video data path and their mapping onto hardware components, is depicted in Fig. 6(b). The hardware and software stack of the camera node is depicted in Fig. 6(c). The camera hardware comprises the OV5642, a 5MP CMOS image sensor array, a Zynq7010 SoC, 1 GB SDRAM, and a gigabit Ethernet interface. The SoC houses a dual-core ARM Cortex A9 processor clocked at 666 MHz and FPGA fabric. The ARM-A9 processor runs embedded Linux, which hosts system libraries (OpenSSL, GMP, libjpeg, etc.) and user libraries (pbc, motion detection, etc.) to be used by the applications. On top of the OS, a custom application framework is designed, which is responsible for providing the intended (application-specific) functionality to the device. The application framework for the secure camera architecture, given by Fig. 7, is divided into four tasks: sensing, processing, security, and communication.

Sensing tasks include reading the visual data from the image sensor and encoding it in the desired format. The OV5642 image sensor is configured to provide data in 640×480 (resolution) 8-bit YUV422 (color-space) format. Processing tasks include the application-specific logic; the prototype for private space monitoring performs video compression and event detection by behavioral analysis of the video data. Video compression is achieved using the JPEG compression engine on the OV5642 sensing unit. An event is triggered if the motion is greater than a predefined threshold. Motion of the monitored individual is computed using the three-frame differencing algorithm by Collins et al. Collins et al. [2000]. The image difference between the frames at times t and t−1 and the difference between t and t−2 are computed to determine regions of legitimate motion and to erase ghosting. If an event has been detected, the frames are forwarded to the security tasks.

Security tasks entail data security and node security. Data security tasks secure the data on-camera as described in Section 5.1. After camera power-up, the encryption and signing keys are extracted from the ring oscillator (RO) PUF implemented in the reprogrammable fabric and are loaded into the cache. Each frame is encrypted using AES128 and MACed using HMAC-SHA256 to ensure data confidentiality and integrity, respectively. The MAC checksums of all frames in the footage are concatenated and signed using the PUF-based Cert-IBS scheme with BLS Boneh et al. [2001] as the underlying standard signature scheme; this ensures authenticity and preserves the frame order.

Freshness of data is ensured by including a timestamp τ before signing the checksums. For timestamp generation, the camera uses an event counter that increments whenever an event is detected. Given that an event is detected by the motion detection algorithm and the event counter is incremented to event_count, the timestamp is calculated as τ = SHA256(I ∥ event_count). A timestamp value holds true only for a specific event event_count detected by camera device I. Following the event event_count, the footage is timestamped with τ.
The value of the timestamp should not repeat among legitimate footages from different events detected by the same camera or among footages from different cameras. This simple check deters replay attacks. It is important to note that encryption and hashing are performed on frames, whereas timestamping and signing are performed on the complete footage.

Node security aims to secure the software and hardware stack of the sensor node; this is achieved by a secure boot of the SoC and the on-chip PUF. The Zynq7010 SoC provides secure boot functionality as part of its boot procedure that verifies authenticity, integrity, and unclonability of the camera's software stack based on digital signatures, message authentication codes (MAC), and encryption.
Figure 8: Chain of trust for the secure boot of the Zynq7010 SoC

The boot mechanism is CPU-driven. Other hardware components used in the boot process are the non-volatile memory (NVM), BootROM, on-chip memory (OCM), the AES/HMAC module, JTAG, and DDR RAM. The software programs involved in the boot are the BootROM code, the first-stage bootloader (FSBL), U-Boot, the Linux kernel, and user applications. The boot chain of the camera device is depicted in Fig. 8. The foundation of secure boot is established by placing the BootROM code in a mask ROM, a one-time programmable memory, which implies that the ROM contents cannot be modified. While creating the image for secure boot, each successive component of the boot chain is signed (RSA), hashed (HMAC-SHA256), and encrypted (AES256). The boot-up starts with the BootROM code loading the FSBL and continues serially with the FSBL loading the FPGA bitstream and the software. During every secure boot-up, the chain of trust is established by the successive verification of the signature (authentication), MAC checksum (integrity), and decryption (confidentiality) of all software, i.e., the FSBL, bitstream, U-Boot, OS, and user applications.

This procedure prevents an adversary from tampering with the software or the FPGA bitstream file. The Zynq SoC contains hard IP cores for AES decryption and HMAC computation. As a result, the difference between the boot times of secure and regular boots is negligible Sanders [2013]. The hardware stack is protected by the on-chip PUF. Incorporating a PUF into a chip makes the chip tamper evident Maes [2012]. Since the PUF behavior corresponds to the underlying silicon fabric, any tampering with the fabric modifies the PUF behavior, thereby modifying the camera fingerprint. This leads to the generation of incorrect signing and encryption keys and thereby incorrect signatures and ciphertexts, which is detected by the verifier.

Data with confidentiality, integrity, authenticity, and freshness guarantees is then forwarded to the communication module for uploading.
The prototype merely demonstrates a proof of concept and can be extended with wireless communication capabilities such as WiFi.
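The chain of trust described above can be sketched as follows. This is a simplified stand-in in which the RSA signature, HMAC-SHA256, and AES256 steps of the real Zynq flow are collapsed into a single MAC check per boot image; names and the dictionary-based image format are illustrative assumptions.

```python
import hashlib
import hmac

# Boot-chain order from Fig. 8: the immutable BootROM verifies the FSBL,
# which then verifies and loads the remaining images.
BOOT_CHAIN = ["FSBL", "bitstream", "u-boot", "kernel", "apps"]

def make_image(payload: bytes, mac_key: bytes) -> dict:
    # At image-creation time each partition is MACed; the real flow also
    # RSA-signs and AES256-encrypts every partition.
    return {"payload": payload,
            "mac": hmac.new(mac_key, payload, hashlib.sha256).digest()}

def secure_boot(images: dict, mac_key: bytes) -> str:
    # Each stage verifies the next one before handing over control,
    # extending the chain of trust rooted in the mask-ROM BootROM code.
    for stage in BOOT_CHAIN:
        img = images[stage]
        good = hmac.compare_digest(
            img["mac"],
            hmac.new(mac_key, img["payload"], hashlib.sha256).digest())
        if not good:
            return "halt: " + stage
    return "boot complete"
```

Tampering with any partition (software or bitstream) makes its verification fail, so the boot halts at that stage instead of executing modified code.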
This section evaluates the trusted sensing and privacy-aware reporting and secure node approaches for IoT-based smart services. It is divided into five parts: Since both approaches utilize the PUF framework to generate and store sensor-bound keys, we present the implementation results of the PUF framework in Section 6.1. The privacy-aware trusted sensing scheme is evaluated in two parts: First, Section 6.2 presents the trusted image sensor prototype and evaluates the overhead on the sensor with respect to storage, latency, and hardware. Second, Section 6.3 evaluates the communication overhead on the mobile device for privacy-aware reporting of the sensed data from trusted sensors. Section 6.4 evaluates the storage, latency, hardware, and communication overhead on the secure camera node (the prototype presented in Section 5.2) incurred by the secure node approach. Section 6.5 discusses the security properties and limitations of both schemes.

The experimental setup used for the evaluation of the privacy-aware trusted sensing and the secure node approaches is as follows: In order to verify the feasibility of a PUF-based approach for sensors, we implemented the PUF framework on three sensing platforms of different complexities, namely (i) the Atmel ATMEGA328P, a lightweight 8-bit MCU running at 8 MHz, (ii) the ARM Cortex M4, a 32-bit MCU running at 168 MHz, and (iii) the Xilinx Zynq7010 SoC with an FPGA and a 32-bit dual-core ARM Cortex A9 processor running at 666 MHz. For an objective comparison of the privacy-aware trusted sensing and the secure node approaches, we prototyped a trusted image sensor and a secure camera node using the same platform, as shown in Fig. 6(a). The platform comprises a 5MP OV5642 image sensor module and a MicroZed board, which houses the Zynq7010 SoC clocked at 666 MHz, 1 GB external RAM for frame buffering, and a gigabit Ethernet interface to upload video footage.
A custom board (mounted below the MicroZed board) was designed to interface the image sensor with the MicroZed board and regulate power to both modules.
Various PUF sources are inherent to a typical sensor, including the SRAM PUF, the RO PUF, and sensor-specific PUFs Cao et al. [2015], Rosenfeld et al. [2010], Rajendran et al. [2016]. Since we target a broad range of sensors, we seek to identify PUF sources that are commonly available on most sensors, such as SRAM and RO PUFs.
A PUF is characterized by three quality parameters: randomness, reliability, and uniqueness. The Hamming weight (HW), the indicator of PUF randomness, measures the deviation of a PUF output from the uniform distribution. For an n-bit response r obtained from a chip U, HW is given by:

HW(r) = (1/n) · Σ_{i=1}^{n} b_i · 100 %   (3)

where b_i is the i-th binary bit in the n-bit PUF response. For a uniformly distributed PUF response, HW should be 50 %.

The change in the PUF response over varying environmental and operating conditions depicts the (lack of) reliability. The change is referred to as the PUF error-rate or noise and is measured in terms of the intra-Hamming distance. An n-bit reference response (r_ref) is extracted from the chip U at room temperature under standard operating conditions. Multiple responses from the same PUF are obtained under different environmental and operating conditions (e.g., varying temperature or supply voltage) and are denoted by r_i. A number of samples of r_i are taken for each combination of the environmental and operating conditions. The PUF error-rate is measured as the average intra-Hamming distance (HD_intra) over N samples obtained under different environmental and operating conditions. For the chip U, it is defined as:

HD_intra(r_ref, r_1, r_2, ···, r_N) = (1/N) · Σ_{i=1}^{N} [HD(r_ref, r_i) / n] · 100 %   (4)

The uniqueness property of a PUF measures how unique the signatures generated from different chips using the same PUF circuit are. The average inter-Hamming distance (HD_inter) of the PUF responses is a commonly used measure of uniqueness. The average HD_inter for a group of M chips is defined as the average of all possible pair-wise Hamming distances among the M chips:

HD_inter(U_1, U_2, ···, U_M) = [2 / (M(M−1))] · Σ_{i=1}^{M−1} Σ_{j=i+1}^{M} [HD(r_{U_i}, r_{U_j}) / n] · 100 %   (5)

For a truly random PUF output, HD_inter should be close to 50 %.

We implemented PUFs on three platforms: (i) the Atmel ATMEGA328P, a lightweight 8-bit MCU, (ii) the ARM Cortex M4, a 32-bit MCU, and (iii) the Xilinx Zynq7010 SoC with re-programmable logic and a dual-core ARM Cortex A9. These platforms are ideally suited as sensor controllers (see Fig. 3) for a broad range of sensors.

The SRAM PUF taps the randomness from the start-up values of the SRAM cells. Once these start-up values are read out, the SRAM can be used as regular memory. As a result, the SRAM PUF implementation does not incur any hardware overhead and is therefore preferred over the RO PUF. However, if SRAM is either not available on board or gets initialized with fixed values during boot-up, the RO PUF is implemented instead. The power-up states of the SRAM cells on the ATMEGA328P and the ARM Cortex M4 show PUF behavior, whereas the SRAM on the Zynq7010 SoC gets initialized with fixed values during boot-up. Therefore, we implemented the SRAM PUF on the ATMEGA328P and the ARM Cortex M4 MCUs and the RO PUF on the Zynq7010 SoC.

For the SRAM PUF implementation, the on-chip SRAM on the Atmel ATMEGA328P and on the ARM Cortex M4 was read out and characterized for PUF behavior. The PUF quality parameters were computed from PUF responses (i.e., start-up values of the SRAM cells) obtained at room temperature. Figs. 9(a) and 9(b) depict the error-rate and the distribution of the PUF responses measured as HD_intra and HW, respectively. The average and maximum values of HD_intra were measured as . % and . % for the ATMEGA328P, and .66 % and .16 % for the Cortex M4. The randomness of the PUF responses for the ATMEGA328P and the Cortex M4 was measured as . % and .96 %, respectively.

Furthermore, we implemented the RO PUF, comprised of ring oscillators and bit counters, on the FPGA part of the Zynq7010. The ROs are arranged into independent RO pairs. To obtain a PUF response, an RO pair is selected. The frequencies generated by the selected ROs increment the respective counters. The marginal difference between the two frequencies causes one counter to overflow before the other. At the overflow of one counter, the bit value of the other counter is read out. Three bits at positions 8, 9, and 10 in the counter value are read out as the output of the RO pair comparison Kodytek and Lorencz [2015]; hence, the RO pairs generate three response bits per pair. We evaluated the RO PUF for HD_intra, HW, and HD_inter. The quality parameters for the Zynq7010 were computed from a total of 800 responses measured over a range of temperatures (100 responses at fixed temperature intervals, plus an additional 100 responses at the highest temperature). The average and maximum HD_intra were computed as . % and .97 %, respectively. The average HW was .95 %. Furthermore, the same PUF was implemented on multiple Zynq7010 boards, and HD_inter was measured as . %.

Instead of designing three separate helper data algorithms (HDAs) for correcting the respective error-rates of the three PUFs, we designed a common HDA that can correct an error-rate of up to 10 %. Guajardo et al. Guajardo et al. [2007] investigated different error-correcting codes that are suitable for HDAs. The error-correcting code determines the number of required PUF response bits and hence the size of the PUF. We evaluated two cases: (i) a simple code using BCH(492,57,171) and (ii) a concatenated code comprising Reed-Muller(16,5,8) and Repetition(5,1,5) codes. The failure rate for PUF-based key reconstruction in both cases is ≤ 10⁻⁶.

Once an error-correcting code (n, k, d) is selected, the number of PUF response bits required to generate an l-bit key is given by (n/k)·l. According to the PUF framework, the helper data W has the same length as the PUF response. Since an RO pair generates three response bits, (n/k)·l bits are generated by (n/(3k))·l RO pairs, i.e., ((n/(3k))·l + 1) ROs. In our design, each RO is implemented as a 3-stage ring (3 NOT gates), so the RO PUF's hardware size is given by ((n/k)·l + 3) logic gates. For a concatenated code (n₁, k₁, d₁)||(n₂, k₂, d₂), the hardware and helper data sizes can be computed using the same procedure, with n = n₁·n₂ and k = k₁·k₂.

In the trusted sensing and privacy-aware reporting approach, a sensor uses a 160-bit sk to sign the sensor readings (sensed data attestation) using the PUF-based Cert-IBS with BLS as the underlying signature scheme, whereas the secure node approach uses a 128-bit k_E to encrypt the frames using the AES128 algorithm and a 160-bit sk to sign the footage using the PUF-based Cert-IBS with BLS as the underlying signature scheme. The PUF framework was implemented on all three platforms. In Table 3 we present the overhead incurred by the PUF framework to generate 128-bit and 160-bit keys, in terms of latency, hardware, and storage.

PUF Source   Key Length   Code      Hardware (≈ logic gates)   Latency (key extraction)   Storage (helper data W)
RO           128-bit      BCH       1108                       ≤ 100 ms                   1105 bits
                          RM||Rep   2051                                                  2048 bits
             160-bit      BCH       1384                                                  1381 bits
                          RM||Rep   2563                                                  2560 bits
SRAM         128-bit      BCH       NA                         ≈ 30 ms                    1105 bits
                          RM||Rep   NA                                                    2048 bits
             160-bit      BCH       NA                                                    1381 bits
                          RM||Rep   NA                                                    2560 bits

Table 3: Implementation results of the PUF framework for 128-bit and 160-bit key generation and storage
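The quality parameters of Eqs. (3)-(5) can be computed from raw bit responses with a few helper functions (a straightforward sketch; responses are modeled as lists of 0/1 bits):

```python
def hamming_weight(r):
    # Eq. (3): percentage of 1-bits; ideally 50 % for a uniform response.
    return 100.0 * sum(r) / len(r)

def hd_intra(r_ref, samples):
    # Eq. (4): average bit-error rate of repeated readouts w.r.t. the
    # reference response r_ref taken at nominal conditions.
    n = len(r_ref)
    dists = [sum(a != b for a, b in zip(r_ref, r)) / n for r in samples]
    return 100.0 * sum(dists) / len(dists)

def hd_inter(responses):
    # Eq. (5): average pairwise distance over M chips; ideally near 50 %.
    n, M = len(responses[0]), len(responses)
    total = sum(sum(a != b for a, b in zip(responses[i], responses[j])) / n
                for i in range(M) for j in range(i + 1, M))
    return 100.0 * 2.0 * total / (M * (M - 1))
```

In practice one would feed these functions the SRAM start-up values or RO-pair comparison bits collected at each temperature and voltage point.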
The trusted sensor prototype (Fig. 3) comprises an OV5642 image sensor array (sensing unit) and a Zynq7010 SoC (sensor controller) running at 666 MHz. The sensing unit was configured for 640×480 resolution and YUV422 color-space. The evaluation matrix for the trusted image sensor comprises three components of the overhead incurred due to the PUF-based Cert-IBS: storage, latency, and hardware.

During the enrollment phase of the privacy-aware trusted sensing scheme, the TA binds a signing key sk to the trusted sensor using the PUF framework and provides a certificate on the public key pk. Using the asymmetric version of the BLS signature scheme as the SS in the PUF-based Cert-IBS, we obtain the sizes sk = 160 bits and cert = 480 bits (pk = 160 bits, Sign_msk = 320 bits). The RO PUF with BCH error-correcting-code-based framework (summarized in Table 3) was implemented on the FPGA part of the Zynq7010 SoC. The PUF-based secure key generation and storage framework for a 160-bit sk incurs 1381 bits of memory for the storage of the helper data (W) and 1384 logic gates of hardware overhead.

During the trusted sensing and privacy-aware reporting phase, the trusted sensor performs the sensed data attestation. During this step, a fresh image frame is read from the OV5642 image sensor and a MAC checksum is computed over the frame using the HMAC-SHA256 algorithm. The checksum is then signed using the PUF-based Cert-IBS with the asymmetric version of BLS as the underlying signature scheme. The sensor then outputs the tuple (M, I, σ, cert).
Figure 9: PUF characterization of (a) the SRAM on the Atmel ATMEGA328P 8-bit MCU, (b) the SRAM on the ARM Cortex M4 32-bit MCU, and (c) the RO PUF on the Zynq7010 SoC. The mean and maximum error-rates (HD_intra) are reported for (a) and (b), together with the randomness of the PUF responses measured as the mean Hamming weight (HW). The quality parameters for (c) are calculated from a total of 800 responses taken over a temperature range (100 responses at fixed temperature intervals plus 100 at the highest temperature), together with the mean and maximum HD_intra and the average HW.
The storage, hardware, and latency overhead incurred by the privacy-aware trusted sensing approach on the sensor is summarized as follows:

• Storage.
The trusted sensor stores the helper data W and the certificate cert. For an l-bit key, the helper data W size is given by (n/k)·l, where n and k are parameters of the chosen error-correcting code. For a 160-bit key generation framework using BCH(492,57,171), l = 160 bits, n = 492 and k = 57, which gives the size of W = 1381 bits. The cert is given by 480 bits. This amounts to a total of 1861 bits (≈233 bytes) of storage overhead.

• Latency.
Key extraction using the PUF framework is performed only at start-up; therefore, runtime latency overhead is incurred only by the sensed data attestation phase of the PUF-based Cert-IBS scheme. During this phase, the sensor controller obtains a new frame from the image sensor, computes a MAC checksum over the frame, and signs the checksum using the PUF-based Cert-IBS with BLS as the underlying signature scheme. The pairing-based cryptography library Lynn [2016] was leveraged for the implementation of the sensed data attestation. The sensor controller, the Zynq7010 SoC, requires . ms to MAC a frame and .27 ms to sign the MAC at 640×480 resolution and YUV422 color-space, which enables the prototype trusted image sensor to secure the frames it captures in real time.

• Hardware. The hardware overhead is incurred only in the case of an RO PUF implementation and amounts to 1384 logic gates to generate a 160-bit signing key (cp. Table 3).
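As a sanity check, the storage figures above can be reproduced from the (n/k)·l formula; the rounding convention is an assumption of this sketch:

```python
# Helper-data and storage sizes from the HDA formulas.
def helper_bits(n: int, k: int, l: int) -> int:
    # Helper data W has the same length as the PUF response: (n/k) * l bits.
    return round(n / k * l)

W = helper_bits(492, 57, 160)        # BCH(492,57,171) for a 160-bit key
CERT_BITS = 480                      # pk (160 bits) + Sign_msk (320 bits)
total_bytes = (W + CERT_BITS) / 8    # total storage overhead on the sensor
```

With these parameters W evaluates to 1381 bits and the total to roughly 233 bytes, matching the Storage bullet; the same function applied to the concatenated RM(16,5,8)||Rep(5,1,5) code (n = 80, k = 5) reproduces the 2048/2560-bit helper data sizes of Table 3.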
The evaluation of the privacy-aware reporting of the trusted sensors' readings aims to compute the communication overhead incurred by the proposed scheme on the user devices. For ease of description, the trusted sensing and privacy-aware reporting scheme was explained using the symmetric pairing setting (Table 2). However, symmetric pairings can only be realized using supersingular elliptic curves. Supersingular elliptic curves E are defined over the finite field F_q, where G and G_T are the groups of elliptic-curve points on E(F_q) and E(F_{q^d}), respectively, and d is called the embedding degree of E. After recent successful attacks on supersingular curves of small characteristic Barbulescu et al. [2013], the available supersingular curves, given by y² = x³ + x over the field F_q for some prime q = 3 (mod 4), have a small embedding degree of 2. This implies that the base field F_q must be large enough to obtain sufficient discrete-log security in F_{q²}. For instance, to obtain 80-bit discrete-log security in F_{q²} (⌈log q²⌉ ≥ 1024), q must be at least 512 bits (⌈log q⌉ ≥ 512). Since G ⊆ E(F_q), this gives a size of a group element in G of 512 bits. Therefore, the communication overhead on a mobile device incurred by our scheme in the symmetric setting is given by (6Q + 12)·512 bits.

However, the same level of security can be achieved with more efficient curves in the asymmetric pairing setting. Therefore, we have implemented the proposed scheme in the asymmetric pairing setting. First, we briefly outline our scheme in the asymmetric setting to compute the overhead in terms of group elements, and then we select the curves to compute the communication overhead on the mobile device.

In the asymmetric setting, the PUF-based Cert-IBS uses the asymmetric version of the BLS signature scheme. Here, the size of a signature, i.e., Sign_sk(M) and Sign_msk(I, pk), is given by one element in G₁, whereas the public half of the signing key, i.e., pk, is given by one element in G₂. Proof generation uses the same framework, P_NIWI, but follows the asymmetric construction based on the SXDH assumption (cp. Section 9 of Groth and Sahai [2012]). Here, commitments to the elements of the witness in G₁ and G₂ cost elements of G₁ and G₂, respectively, and the proof consists of two parts, π and θ. Given Q sensors' readings with all signatures aggregated into σ̄, the prover P_NI of P_NIWI outputs ({d(I_i), c(pk_i)}_{i=1}^{Q}, c(σ̄), π, θ), which amounts to an overhead of (6 + 2Q) G₁ elements and (4 + 2Q) G₂ elements.

Barreto-Naehrig (BN) curves Barreto and Naehrig [2005] are ideal for the asymmetric setting as they offer much higher security with more efficient curves. An efficient BN curve Lynn [2016] allows us to represent elements of G₁ with 160 bits, G₂ with 320 bits, and G_T with 1920 bits. This offers 80 bits of discrete-log security in G_T. In the asymmetric setting, the communication overhead is given by (6 + 2Q)·160 + (4 + 2Q)·320 bits.

For a concrete comparison, we evaluated the communication overhead for two real-world participatory sensing (PS) applications: Google Street View Google and Wikicity Lab [2015]. The former models a typical PS application that requires its participants to submit multiple sensor readings; moreover, its payload includes multimedia data. The latter models the worst-case scenario with respect to overhead, because it requires its participants to submit only a single sensor reading (i.e., a scalar value).
APP 1: Google Street View Google requires the participants to capture and upload geotagged images to Google Maps using their smartphone (user device) cameras and GPS receivers (sensors). Here Q = 2, where M₁ is the image from the camera and M₂ the location reading from the GPS receiver. Given a sensor configuration of 640×480 resolution, RGB color-space, and 8 bits per color-plane, an image size of M₁ = 900 kilobytes results. Typically, the GPS
Module                              Runtime
Keys Extraction (PUF Framework)     < 100 ms
Event Detection                     21.1 ms †
Frame Encryption (AES128)           †
† values are averaged over 100 frames

Table 4: Running times for individual modules of the application framework for the Zynq7010 SoC-based secure camera node

receiver provides location data in the NMEA format. The maximum length of an NMEA sentence (M₂) is 82 bytes. Therefore, the total size of the payload (i.e., M₁ + M₂) is approximately 900 kilobytes. The communication overhead incurred on a smartphone for reporting (M₁ + M₂) to the Google Street View server amounts to 520 bytes, i.e., less than 0.1 % of the payload.

APP 2: Wikicity
Lab [2015] is an urban planning application that periodically captures the locations of citizens, leveraging smartphones and vehicle-embedded location sensors, to monitor their reactions to various events happening in the city. Here, Q = 1 and M is 82 bytes of location data. For Wikicity, the communication overhead amounts to 400 bytes. APP 1 models a typical multimedia report, whereas APP 2 determines the minimum communication overhead incurred by the privacy-aware trusted sensing scheme on a user device.
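The two overhead formulas can be compared directly. A small sketch, using the per-element sizes stated above (512 bits in the symmetric setting; 160-bit G₁ and 320-bit G₂ elements on the BN curve):

```python
def overhead_bits(Q: int, symmetric: bool = False) -> int:
    # Symmetric pairing: (6Q + 12) group elements of 512 bits each.
    # Asymmetric (BN):   (6 + 2Q) G1 elements of 160 bits
    #                  + (4 + 2Q) G2 elements of 320 bits.
    if symmetric:
        return (6 * Q + 12) * 512
    return (6 + 2 * Q) * 160 + (4 + 2 * Q) * 320

# APP1 (Google Street View, Q = 2) and APP2 (Wikicity, Q = 1):
app1 = overhead_bits(2) // 8   # bytes per report
app2 = overhead_bits(1) // 8   # bytes per report
```

Evaluating the asymmetric formula gives 4160 bits (520 bytes) for Q = 2 and 3200 bits (400 bytes) for Q = 1, versus 12288 bits for Q = 2 in the symmetric setting, which is why the asymmetric instantiation was chosen.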
This section presents the implementation results of the prototype secure camera node as discussed in Section 5.2. The node uses a 128-bit key, k_E, to encrypt each frame using AES128 and a 160-bit key, sk, to sign the footage using the BLS signature scheme. An RO PUF and BCH error-correcting-code-based framework (Table 3) was implemented for the generation of sk and k_E. The total overhead in terms of latency, storage, hardware, and communication incurred by the sensing, processing, security, and communication tasks of the secure node approach on the camera node is given below.

• Latency. Since key extraction and secure boot are performed only once at power-up, they do not incur latency during runtime. Timestamping and signing are performed once per footage. At a resolution of 640×480, the runtime latency due to the application framework corresponds to a higher frame rate than the image sensor delivers at this resolution; the sensor's output rate thus limits the throughput of the prototype. The running times measured for the individual tasks of the camera application framework are given in Table 4.

• Storage. The breakdown of the storage overhead incurred by the individual components of the proposed security mechanism is given in the second and third rows of Table 5. The sizes of the helper data corresponding to the 160-bit sk and the 128-bit k_E using the BCH codes are 1381 bits and 1105 bits, respectively (Table 3). The node also needs to store cert, which requires 480 bits (60 bytes). The Zynq7010 SoC stores three keys, which are used during the secure boot: a private 256-bit key for AES256, a 256-bit private key for HMAC-SHA256, and a 2048-bit modulus used by RSA. The size of a signed and encrypted partition is the same as that of the unencrypted and unsigned one; therefore, the storage overhead incurred by the secure boot functionality amounts to 2560 bits (320 bytes). The Zynq7010 devices employ RSA for the authentication of the boot partitions, which uses large key sizes (2048 bits). This could be replaced by a signature scheme based on elliptic curves, such as the elliptic curve digital signature algorithm (ECDSA), which provides the same level of security with a 224-bit key and produces much smaller signatures. In comparison to the memory consumed by the application logic, the storage overhead of the secure node mechanism is negligible. For example, the double frame buffer (Table 5) used by the processing and communication tasks (Fig. 7) incurs orders of magnitude greater storage overhead than the secure node mechanism.

• Hardware. Hardware overhead is incurred by the RO PUF implementation in the FPGA part of the SoC. Table 3 puts the total hardware overhead for generating the 160-bit sk and the 128-bit k_E at approximately 2500 logic gates, which is negligible compared to current state-of-the-art security chip solutions.

• Communication.
Given an assisted living scenario, a camera with the proposed secure node mechanism uploads the encrypted-MACed-timestamped-signed footage, i.e., {C_i, τ, σ}, i = 1…N, to the edge or cloud storage
Module                                   Memory
Double Frame-Buffer                      2 × 600 kilobytes
PUF Framework (Helper data)              311 bytes
Footage Signing (cert)                   60 bytes
Secure Boot (RSA, AES, and HMAC keys)    320 bytes

Table 5: Memory consumption by the individual modules of the application framework for the Zynq7010 SoC-based secure camera node

server. The encrypted frame C_i has the same size as the original image. A 256-bit timestamp τ and a 160-bit signature σ are added to the footage for uploading. For a footage comprised of N frames, the total communication overhead incurred on the secure camera node amounts to only 416 bits, which is a negligible fraction of a single frame's size.

This section discusses the security and privacy properties of the two schemes, namely trusted sensing and privacy-aware reporting and secure node. We also identify the limitations of our approaches and the additional measures that can be taken to overcome these limitations.
The correctness of the trusted sensing and privacy-aware reporting scheme follows from the correctness of the PUF-based Cert-IBS scheme and the P_NIWI system. The system parameters Σ, crs, and H are generated by the trusted authority. Since any compromise of the trusted authority nullifies the trust and privacy guarantees, we emphasize that the offline nature of the authority in our scheme greatly reduces the risk of compromise. The security and privacy properties of the privacy-aware trusted sensing scheme are as follows:

• Sensor-bound Secure Key Storage.
The on-chip PUF serves as a secure key storage for the sensor. The key is generated from the hardware on sensor power-up. During the off state, the key exists in the form of unreadable variations introduced into the hardware by the CMOS manufacturing process. In comparison with a secure memory alternative, the PUF offers a cost-effective and more lightweight secure storage. Moreover, the PUF framework binds the key to the sensor hardware; the key never leaves the sensor, thereby minimizing the risk of a key compromise.

• Trusted Sensing.
Trusted sensing refers to sensing techniques that ensure security guarantees about the integrity, authenticity, and freshness of the sensed data. The scheme withstands sensed-data corruption attacks due to a compromised OS running on the mobile device. The OS receives the sensors' readings accompanied by the commitments and proofs of P_NIWI from the root virtual machine (see Fig. 3). In order to inject fabricated data at the OS level, an attacker has to produce a valid P_NIWI proof of knowledge on a valid PUF-based Cert-IBS signature. A uf-cma secure PUF-based Cert-IBS implies that there is a negligible probability that an attacker produces a valid PUF-based Cert-IBS signature. Further, the soundness of P_NIWI implies that it is impossible to generate a valid proof without satisfying the PUF-based Cert-IBS verification equations. Furthermore, any manipulation of the values of the readings, commitments, or proofs results in an unsuccessful P_NIWI verification. Authenticity of each reading is ensured by signing the data with a unique, sensor-bound private key within the sensor. Freshness is ensured by including a timestamp before signing the reading (sensed data attestation). In the participatory sensing scenario, freshness of the sensed data is ensured by including a timestamp, obtained from the smartphone's GPS receiver, before the signature aggregation step.

• Privacy-Aware Reporting.
Anonymity of a user follows from the witness indistinguishability of P NIWI .Given N mobile devices equipped with the privacy-aware trusted sensing scheme, contributing sensed datato an application server, each device is N -anonymous with respect to the server. The scheme providesCPA-anonymity Groth [2007] since it does not provide the server oracle access to extract the witness from theproof using the extraction algorithm X NI of P NIWI . Since the server is assumed honest-but-curious threat toprivacy, the CPA-anonymity suffices the privacy requirements. • Tamper Resistance.
Any physical tampering is detected due to the on-chip PUF since the PUF extracts asensor fingerprint as a function of intrinsic details of the hardware. Any modification in hardware is detectedas it results in generation of incorrect key. 24
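The power-up key regeneration described above is typically realized with a fuzzy extractor. The following is a minimal, illustrative sketch (not the paper's implementation) of the code-offset construction with a repetition code, assuming the PUF response is available as a list of bits:

```python
import hashlib
import secrets

REP = 5  # repetition factor: each key bit is spread over 5 response bits,
         # so up to 2 bit flips per 5-bit block are corrected by majority vote

def enroll(response_bits):
    """One-time enrollment: derive a key and public helper data from a PUF response."""
    key_len = len(response_bits) // REP
    key_bits = [secrets.randbelow(2) for _ in range(key_len)]
    codeword = [b for b in key_bits for _ in range(REP)]       # repetition encode
    helper = [r ^ c for r, c in zip(response_bits, codeword)]  # code-offset helper data
    key = hashlib.sha256(bytes(key_bits)).digest()             # privacy amplification
    return helper, key

def reconstruct(noisy_bits, helper):
    """On every power-up: regenerate the same key from a noisy re-measurement."""
    codeword = [r ^ h for r, h in zip(noisy_bits, helper)]
    key_bits = [1 if sum(codeword[i:i + REP]) > REP // 2 else 0
                for i in range(0, len(codeword), REP)]         # majority decode
    return hashlib.sha256(bytes(key_bits)).digest()
```

The helper data need not be kept secret; the majority decoding absorbs the bounded measurement noise of later PUF readouts, and any larger deviation — such as one caused by hardware tampering — yields a different key.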
• Identity Management. Every sensor is assigned a unique physical identifier, which can be used to track and monitor the state of a node and to manage its access privileges in an all-open network such as the IoT.

Next, we enlist some limitations of our scheme and identify additional measures that may be combined with it to overcome these limitations:

• Linkability and Location Privacy. To ensure effective user privacy at the system level, anonymization at the lower layers of the communication stack must also be ensured, since multiple reports can be trivially linked using the IP address or physical address of a user device (e.g., a smartphone). Techniques such as mix networks, onion routing, IP rotation, and MAC address randomization can be leveraged for this purpose. For location privacy, an appropriate location blurring technique, such as cloaking or perturbation, must be used.

• Event Faking. Event faking attacks refer to the capture of sensed data from an undesirable environment. The proposed mechanism does not address these attacks; however, they can be thwarted by using statistical techniques on the server side.
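Such server-side statistical filtering lies outside the scheme itself; as one hedged illustration, submitted readings can be screened with the modified z-score, a robust outlier test based on the median absolute deviation (the reading values and the 3.5 threshold below are arbitrary examples):

```python
import statistics

def flag_implausible(readings, threshold=3.5):
    """Split readings into (accepted, flagged) using the modified z-score."""
    med = statistics.median(readings)
    mad = statistics.median(abs(r - med) for r in readings)
    if mad == 0:  # all readings (nearly) identical: nothing to flag
        return list(readings), []
    accepted, flagged = [], []
    for r in readings:
        score = 0.6745 * abs(r - med) / mad  # modified z-score
        (flagged if score > threshold else accepted).append(r)
    return accepted, flagged
```

A median-based test is preferable to a plain mean/standard-deviation test here, because a few extreme fabricated readings would otherwise inflate the standard deviation and mask themselves.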
Data and node security properties of the secure node scheme are as follows:

• Secure Key Storage. Secure storage of the keys is ensured by an on-chip PUF. On camera power-up, the keys are generated from intrinsic variations of the hardware structure and loaded into the cache. When the camera device is off, the keys exist in the form of unreadable variations introduced into the hardware by the CMOS manufacturing process. Compared to secure memory alternatives, the PUF offers much cheaper secure storage.

• Platform-bound Keys. The PUF framework binds the signing and encryption keys to the camera platform. The signing key never leaves the platform, thereby minimizing the risk of a key compromise.

• Data Nonrepudiation. All video data leaving the camera carry integrity and authenticity guarantees. Authenticity is ensured by signing the video data using the platform-bound signing key. PUF-based Cert-IBS with BLS Boneh et al. [2001] as the underlying standard signature scheme is existentially unforgeable under chosen message attack (uf-cma secure), which is the standard security notion for a digital signature algorithm. Any modification or fabrication of data during delivery or archival can be detected at the caretaker's monitoring device. Spoofing using offline images can be addressed by using multiple sources for event detection, e.g., on-camera motion detection and sound detection.

• Data Confidentiality. Privacy of the monitored individual(s) is ensured by end-to-end encryption of each frame using the AES128 algorithm.
• Access Authorization. Secure key exchange and key storage on the caretaker's monitoring device ensure that only a legitimate caretaker can access the decrypted video data.

• Data Protection Lifetime. Data is protected close to the source (sensor), and the security guarantees on the data remain valid for the entire lifetime of the data (i.e., during transmission, storage on the cloud, and delivery to the caretaker's monitoring device). On the monitoring device, the data is consumed only after successful verification of the security guarantees.

• Camera Firmware Protection. Each component of the camera node's boot chain (i.e., BootROM code, FSBL, FPGA bitstream, U-Boot, OS and apps) is hashed, signed and encrypted using the HMAC-SHA256, RSA, and AES256 algorithms. At every boot-up, the integrity and authenticity of the camera firmware is verified through signature verification of each partition. Furthermore, the encryption of the boot partitions ensures unclonability of the camera firmware, since the decryption key exists only within the security perimeter of the camera node; no hardware without the decryption key is capable of decrypting the firmware.

• Physical Security. First, the on-chip PUF offers resistance against tampering of the camera hardware. Since the PUF extracts the device fingerprint as a function of intrinsic details of the hardware, hardware tampering is detected because it results in an incorrect device fingerprint and keys. Second, in a TPM-based solution, data is transferred from the host processor to the TPM chip (external to the host), where the data protection mechanisms are applied; this exposes an interface carrying unprotected data, which can be tapped to bypass the security mechanisms. By incorporating the PUF into the host SoC, these exposed interfaces are eliminated, resulting in better physical security.

Next, we enlist some limitations of our scheme and identify additional countermeasures that may be combined with it to overcome these limitations:
• Side Channel Attacks. First, although the security keys are generated securely inside the SoC, key generation on every power-up opens up electromagnetic and power side channels. Analysis of the side channel information can result in partial recovery of the keys by an attacker. Proven, effective techniques such as masking (e.g., a reversible process in which intermediate values of variables are randomized by masking them with random numbers) and hiding (e.g., the use of dual-rail logic to flatten the data-dependent leakage) can be leveraged to thwart side channel attacks. Second, since data upload is triggered upon every event detection, the transmission pattern of the camera opens up another side channel that leaks, e.g., whether or not someone is at home. Transmitting dummy data at random intervals is a simple countermeasure that can be added to the system to mitigate this threat.

• Denial of Service. Denial of service attacks by (i) the cloud, such as deleting the archived data or blocking downloads, or (ii) a third party, such as corrupting the video or control data in transit or storage, are not addressed by this scheme.

• Monitoring Device Security. The scheme uses symmetric key encryption (i.e., identical keys for encryption and decryption), since symmetric encryption is orders of magnitude faster and less power hungry than asymmetric key encryption. However, symmetric key encryption requires a secure key exchange between the camera and the monitoring device as well as secure storage of the key on the monitoring device. Key exchange is done by the caretaker in a private space using a local interface, so the risk of a key compromise is relatively low. On a monitoring device with an untrusted software stack, virtualization, a secure key vault or a PUF can be used to securely store the key.

To summarize, incorporating the trusted sensing and privacy-aware reporting solution into mobile devices can protect participatory sensing applications against data pollution and personal privacy leakage attacks by ensuring three security guarantees. First, the sensed data remotely collected from ubiquitous commodity sensing devices carries integrity (i.e., no malicious manipulation of the data), authenticity (i.e., the data originated from an authentic hardware sensor), and freshness (i.e., the data is recent and no adversary replayed an old message). Second, the trusted sensing and privacy-aware reporting scheme ensures anonymization and unlinkability of multiple submissions from the same device; the anonymity set is given by the number of devices registered with or contributing to the application server. This prevents the server from creating personal profiles of the participating individuals. An effective privacy protection mechanism would encourage increased user participation in participatory sensing applications, which is currently hindered by privacy concerns. Third, the scheme improves the physical security of the sensors, as the on-chip PUF enables resistance against hardware tampering. The trusted sensing and privacy-aware reporting approach incurs bytes of storage, ms of latency and logic-gates of hardware overhead on a sensor.
To generalize, the latency overhead depends on the sensor clock and the nature of the sensed data, whereas the storage and hardware costs are fixed for all sensors. The communication overhead on the mobile device, for submitting Q readings to an application server, is given by (6 + 2Q)·160 + (4 + 2Q)·320 bits. The participatory sensing application scenario was merely considered to illustrate the approach; the solution addresses all IoT applications in which raw sensed data is collected from commodity sensing devices by centralized server(s) and the processing of the data takes place at the application server(s).

Implementation of the secure node approach in visual monitoring applications thwarts the threats of illegitimate data access and/or manipulation and of illegitimate control (over nodes and/or data) in these applications. The mechanism ensures integrity, authenticity, freshness, and confidentiality of the sensed data while allowing for processing of the data on the nodes. Threats to the personal privacy of the monitored individuals, whether from a malicious application server or an unknown adversary, are addressed by ensuring data confidentiality and access authorization. The solution also protects the camera hardware and software against hardware tampering, firmware manipulation, and firmware cloning attacks. This is particularly useful in cases where camera (or other sensor) nodes are deployed in unguarded open spaces.

The secure node approach for the assisted living scenario incurs bytes of storage, ms of latency, logic-gates of hardware, and bits of communication overhead on a camera node configured for × YUV422 format. The latency overhead depends on the node clock and the type of sensor, whereas the storage, hardware, and communication costs are the same for any sensor node.

The secure node scheme ensures sensed data, personal privacy, and sensor node protection for IoT applications.
The security solution offers more flexibility compared to trusted sensing and privacy-aware reporting, as it additionally allows for the processing of sensed data on the sensor nodes. The solution is ideally suited for visual monitoring applications; however, it can also be leveraged for other IoT applications that require processing of the sensed data locally on the sensor nodes.
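For the centralized scheme, the communication-overhead expression quoted above, (6 + 2Q)·160 + (4 + 2Q)·320 bits for Q readings, is straightforward to evaluate; a small helper (the formula is taken verbatim from the text):

```python
def report_overhead_bits(q: int) -> int:
    """Communication overhead in bits for submitting q readings to the
    application server, per the formula (6 + 2q)*160 + (4 + 2q)*320."""
    return (6 + 2 * q) * 160 + (4 + 2 * q) * 320
```

For a single reading this gives 3200 bits (400 bytes), and each additional reading adds a constant 2·160 + 2·320 = 960 bits, i.e., the overhead grows only linearly in the number of readings per report.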
Sensors are the largest and most common source of data for IoT-based smart services. Trustworthiness of sensed data and users' privacy are essential security requirements for these services. We presented lightweight and effective approaches to protect the sensor nodes, the sensed data and the users' privacy, both for applications that require delivery of raw data to the server for processing and for applications that require processing of the sensed data locally on the sensor nodes. Both solutions ensure integrity, authenticity and freshness of the sensed data, integrity of the sensor hardware and usage privacy of the sensors. The scheme does not require any additional secure hardware and can be mapped onto the existing sensors' resources. Experimental evaluation of both solutions revealed that the proposed scheme incurs only insignificant latency, storage, hardware and communication overhead on the sensor nodes. Using the proposed sensors can effectively thwart sensed data pollution and privacy leakage attacks.

However, there are a number of open challenges that can be addressed as extensions of this work. The first open research problem is the privacy versus utility trade-off. During the privacy preservation process, the utility of the data diminishes as sensitive information, such as uniquely identifying information, is removed, transformed, or distorted to achieve anonymity or confidentiality. Identifying the equilibrium between sensed data privacy and utility in IoT applications is an open challenge. Towards this goal, exploring other nuances of privacy between end-to-end encryption and full access could provide interesting insight.

Second, the trusted sensor and secure node prototypes only demonstrate a proof of concept and were not optimized for performance; a number of performance improvements can be made in this regard. Considering the resource constraints of the sensors, further lightweight security primitives can be investigated. For instance, digital signature schemes require extensive resources that might not be economically feasible for many resource-constrained IoT devices. Identification of lightweight security mechanisms that ensure integrity and authenticity guarantees is another open challenge in this regard.
References
J. Farrell and M. Barth. The Global Positioning System & Inertial Navigation, volume 61. McGraw-Hill, New York, NY, USA, 1999.
Fueled. Paying with my phone, 2018. URL https://fueled.com/blog/10-ways-to-pay-with-your-smartphone/. Accessed 18 May 2018.
R. Clayton, T. Heaton, M. Chandy, A. Krause, M. Kohler, J. Bunn, R. Guy, M. Olson, M. Faulkner, M. Cheng, et al. Community seismic network. Annals of Geophysics, 54(6):738–747, 2012.
J. Carrapetta, N. Youdale, A. Chow, and V. Sivaraman. Haze watch project, 2010. Accessed 06 August 2018.
T. Das, P. Mohan, V. Padmanabhan, R. Ramjee, and A. Sharma. PRISM: platform for remote sensing using smartphones. In Proceedings of the 8th International Conference on Mobile Systems, Applications, and Services, pages 63–76. ACM, 2010.
M. Schoenfeld, S. Compton, H. Mead, D. Weiss, L. Sherfesee, J. Englund, and L.R. Mongeon. Remote monitoring of implantable cardioverter defibrillators: A prospective analysis. Pacing and Clinical Electrophysiology, 27(6p1):757–763, 2004.
eCAALYX. Enhanced Complete Ambient Assisted Living Experiment, 2015. URL http://ecaalyx.org/ecaalyx.org/index.html. Accessed 17 May 2018.
T. Denning, A. Andrew, R. Chaudhri, C. Hartung, J. Lester, G. Borriello, and G. Duncan. BALANCE: towards a usable pervasive wellness application with accurate activity inference. In Proceedings of the 10th Workshop on Mobile Computing Systems and Applications, page 5. ACM, 2009.
Tesla. Self driving car demo, 2018. URL https://vimeo.com/192179727. Accessed 18 May 2018.
M.R. Hafner, D. Cunningham, L. Caminiti, and D. Del Vecchio. Cooperative collision avoidance at intersections: Algorithms and experiments. IEEE Transactions on Intelligent Transportation Systems, 14(3):1162–1175, 2013.
B. Lightner, D. Borrego, C. Myers, and L.H. Lowrey. Wireless diagnostic system and method for monitoring vehicles, October 21 2003. US Patent 6,636,790.
P. Varaiya. Smart cars on smart roads: Problems of control. IEEE Transactions on Automatic Control, 38(2):195–207, 1993.
D. Cook, M. Youngblood, E. Heierman, K. Gopalratnam, S. Rao, A. Litvin, and F. Khawaja. MavHome: An agent-based smart home. In Proceedings of the First IEEE International Conference on Pervasive Computing and Communications, pages 521–524. IEEE, 2003.
J. Ballesteros, M. Rahman, B. Carbunar, and N. Rishe. Safe cities: A participatory sensing approach. In Proceedings of the IEEE 37th Conference on Local Computer Networks, pages 626–634. IEEE, 2012.
L. Atzori, A. Iera, and G. Morabito. The internet of things: A survey. Computer Networks, 54(15):2787–2805, 2010.
S. Saroiu and A. Wolman. I am a sensor, and I approve this message. In Proceedings of the Eleventh Workshop on Mobile Computing Systems & Applications, pages 37–42. ACM, 2010.
G-DATA. Mobile Malware Report 2017. Accessed 17 May 2018.
eSecurity Planet. Inside the Bluebox Android Master Key Vulnerability, 2015. Accessed 17 May 2018.
Bluebox Security. Android FakeID Vulnerability, 2015. Accessed 17 May 2018.
H. Liu, S. Saroiu, A. Wolman, and H. Raj. Software abstractions for trusted sensors. In Proceedings of the 10th International Conference on Mobile Systems, Applications, and Services, pages 365–378. ACM, 2012.
T. Winkler and B. Rinner. Security and Privacy Protection in Visual Sensor Networks: A Survey. ACM Computing Surveys, 41(1):1–38, 2014.
I. Haider and B. Rinner. Securing cloud-based IoT applications with trustworthy sensing. In Proceedings of the 2nd EAI International Conference on Cloud, Networking for IoT, pages 218–227. Springer, 2017a.
I. Haider and B. Rinner. Private Space Monitoring with SoC-Based Smart Cameras. In Proceedings of the 14th International Conference on Mobile Ad-Hoc and Sensor Systems (MASS), pages 19–27. IEEE, 2017b.
A. Dua, N. Bulusu, W-C Feng, and W. Hu. Towards trustworthy participatory sensing. In Proceedings of the 4th USENIX Conference on Hot Topics in Security, pages 8–18. USENIX Association, 2009.
K. Dietrich and J. Winter. Implementation aspects of mobile and embedded trusted computing. In Proceedings of the International Conference on Trusted Computing, pages 29–44. Springer, 2009.
N. Aaraj, A. Raghunathan, and N. Jha. Analysis and design of a hardware/software trusted platform module for embedded systems. ACM Transactions on Embedded Computing Systems (TECS), 8(1):1–31, 2008.
T. Winkler and B. Rinner. TrustCAM: Security and privacy-protection for an embedded smart camera based on trusted computing. In Proceedings of the IEEE Conference on Advanced Video and Signal-based Surveillance, pages 593–600, 2010.
T. Winkler, A. Erdélyi, and B. Rinner. TrustEYE.M4: Protecting the sensor – not the camera. In Proceedings of the IEEE Conference on Advanced Video and Signal-based Surveillance, pages 159–164, 2014.
A. Erdélyi, T. Barat, P. Valet, T. Winkler, and B. Rinner. Adaptive Cartooning for Privacy Protection in Camera Networks. In Proceedings of the IEEE Conference on Advanced Video and Signal-based Surveillance, pages 44–49, 2014.
M. Potkonjak, S. Meguerdichian, and J. Wong. Trusted sensors and remote sensing. In Proceedings of the 9th Annual IEEE Conference on Sensors, pages 1104–1107. IEEE, 2010.
C. Cornelius, A. Kapadia, D. Kotz, D. Peebles, M. Shin, and N. Triandopoulos. AnonySense: Privacy-aware people-centric sensing. In Proceedings of the 6th International Conference on Mobile Systems, Applications, and Services, pages 211–224. ACM, 2008.
E. De Cristofaro and C. Soriente. Short paper: PEPSI—privacy-enhanced participatory sensing infrastructure. In Proceedings of the Fourth ACM Conference on Wireless Network Security, pages 23–28. ACM, 2011.
T. Dimitriou, I. Krontiris, and A. Sabouri. Pepper: a querier's privacy enhancing protocol for participatory sensing. In Proceedings of the International Conference on Security and Privacy in Mobile Information and Communication Systems, pages 93–106. Springer, 2012.
K. Rosenfeld, E. Gavas, and R. Karri. Sensor physical unclonable functions. In Proceedings of the International Symposium on Hardware-Oriented Security and Trust (HOST). IEEE, 2010.
Y. Cao, L. Zhang, S. Zalivaka, C-H Chang, and S. Chen. CMOS image sensor based physical unclonable function for coherent sensor-level authentication. IEEE Transactions on Circuits and Systems, 62(11):2629–2640, 2015.
Wired. iPhone tracks your every move, and there's a map for that, 2011. Accessed 14 June 2018.
J. Groth and A. Sahai. Efficient noninteractive proof systems for bilinear groups. SIAM Journal on Computing, 41(5):1193–1232, 2012.
X. Chen, T. Garfinkel, C. Lewis, P. Subrahmanyam, C. Waldspurger, D. Boneh, J. Dwoskin, and D. Ports. Overshadow: A virtualization-based approach to retrofitting protection in commodity operating systems. In Proceedings of the 13th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), pages 2–13. ACM, 2008.
J. Brakensiek, A. Dröge, M. Botteck, H. Härtig, and A. Lackorzynski. Virtualization as an enabler for security in mobile devices. In Proceedings of the 1st Workshop on Isolation and Integration in Embedded Systems, pages 17–22. ACM, 2008.
P. Tuyls and L. Batina. RFID-tags for Anti-Counterfeiting. In Proceedings of Topics in Cryptology – Cryptographers' Track at the RSA Conference, pages 115–131. Springer, 2006.
M. Bellare, C. Namprempre, and G. Neven. Security proofs for identity-based identification and signature schemes. Journal of Cryptology, 22(1):1–61, 2009.
D. Boneh, X. Boyen, and H. Shacham. Short group signatures. In Proceedings of the Annual International Cryptology Conference, pages 41–55. Springer, 2004.
B. Treece. CPU or FPGA for image processing: Which is best?, 2017. Accessed 18 May 2018.
R. Collins, A. Lipton, T. Kanade, H. Fujiyoshi, D. Duggins, Y. Tsin, D. Tolliver, N. Enomoto, O. Hasegawa, P. Burt, and L. Wixson. A system for video surveillance and monitoring. Technical Report CMU-RI-TR-00-12, Robotics Institute, Carnegie Mellon University, 2000.
D. Boneh, B. Lynn, and H. Shacham. Short signatures from the Weil pairing. In Proceedings of the International Conference on the Theory and Application of Cryptology and Information Security, pages 514–532. Springer, 2001.
L. Sanders. Secure boot of Zynq-7000 all-programmable SoC. Application Note XAPP1175, Xilinx, 2013.
R. Maes. Physically unclonable functions: Constructions, properties and applications. PhD thesis, KU Leuven, 2012.
J. Rajendran, J. Tang, and R. Karri. Securing pressure measurements using SensorPUFs. In Proceedings of the International Symposium on Circuits and Systems (ISCAS), pages 1330–1333. IEEE, 2016.
F. Kodytek and R. Lorencz. A Design of Ring Oscillator Based PUF on FPGA. In Proceedings of the 18th International Symposium on Design and Diagnostics of Electronic Circuits & Systems, pages 37–42. IEEE, 2015.
J. Guajardo, S. Kumar, G.-J. Schrijen, and P. Tuyls. Physical unclonable functions and public-key crypto for FPGA IP protection. In Proceedings of the International Conference on Field Programmable Logic and Applications, pages 189–195. IEEE, 2007.
B. Lynn. The Pairing-Based Cryptography Library, 2016. URL https://crypto.stanford.edu/pbc/. Accessed 17 May 2018.
R. Barbulescu, P. Gaudry, A. Joux, and E. Thomé. A quasi-polynomial algorithm for discrete logarithm in finite fields of small characteristic. Cryptology ePrint Archive, Report 2013/400, 2013. URL http://eprint.iacr.org/2013/400.
P. Barreto and M. Naehrig. Pairing-friendly elliptic curves of prime order. In Proceedings of the International Workshop on Selected Areas in Cryptography, pages 319–331. Springer, 2005.
Google. Street View, 2016. Accessed 17 May 2018.
MIT Senseable City Lab. WikiCity Rome, 2015. URL http://senseable.mit.edu/wikicity/rome/. Accessed 17 May 2018.
J. Groth. Fully anonymous group signatures without random oracles. In Proceedings of ASIACRYPT, pages 164–180. Springer, 2007.