PUF-FSM: A Controlled Strong PUF
Yansong Gao and Damith C. Ranasinghe
Abstract—Physical unclonable functions (PUFs), as hardware security primitives, exploit manufacturing randomness to extract instance-specific challenge (input) response (output) pairs (CRPs). Since their emergence, the community has pursued a strong PUF primitive with a large CRP space that is resilient to model-building attacks; a practical realization of a strong PUF remains challenging to date. This paper presents the PUF finite state machine (PUF-FSM), which serves as a practical controlled strong PUF. Previous controlled PUF designs face the difficulty of stabilizing noisy PUF responses, which requires error correction logic; moreover, the helper data computed to assist error correction leaks information, placing the controlled PUF under the threat of fault attacks or reliability-based attacks. The PUF-FSM eschews the error correction logic and the on-chip computation, storage and loading of helper data by employing only error-free responses, judiciously determined on demand, in the absence of the physical Arbiter PUF, from its statistical model, while retaining a large CRP space. In addition, access to the PUF-FSM is controlled by a trusted entity. Control by means of i) restricting the challenges presented to the PUF and ii) preventing repeated response evaluations that would leak unreliability side-channel information is the foundation for defending against the most powerful modeling attacks. The PUF-FSM goes beyond authentication/identification to key generation and advanced cryptographic applications built upon a shared key.
Index Terms—Physical unclonable function, APUF, error-free responses, statistical model, modeling attacks, fault attacks.
I. INTRODUCTION
Physical unclonable functions (PUFs), as hardware security primitives, exploit manufacturing variations to extract secrets on demand [1], [2]. PUFs are increasingly adopted to provide security for pervasive, resource-constrained smart Internet of Things (IoT) devices as an alternative to storing a digital secret in non-volatile memory (NVM). Digital keys in NVM are vulnerable to various attacks, especially when cost-sensitive IoT devices have no room for expensive protection mechanisms. With a PUF, no digital secret—which would have to be securely stored—is involved; the hardware itself is the secret key. Because the key originates from true randomness, it cannot be duplicated and offers higher resistance to attacks, especially invasive attacks [3].

Since the first silicon PUF, the Arbiter PUF (APUF), was coined in 2002 [4], the PUF community has pursued so-called strong PUFs that not only have a large challenge response pair (CRP) space but are also resilient to modeling attacks. Applications of strong PUFs range from elementary identification and authentication to key generation and more advanced cryptographic protocols such as key exchange and oblivious transfer [5]. Though strong PUFs such as the Optical PUF [6] and the SHIC PUF [7] do exist, a practical and lightweight strong PUF realization seamlessly compatible with current CMOS technology has proven challenging [8] in the face of modeling attacks such as logistic regression (LR) and the recently revealed, more powerful Covariance Matrix Adaptation Evolution Strategy (CMA-ES) attacks, which have broken previously deemed-practical strong PUFs including the XOR-APUF, Feedforward APUF, Lightweight Secure PUF [9], [10], [11] and even the Slender PUF [12], [13].

Yu et al. [14] recently presented a practical strong PUF that upper-bounds the number of CRPs available to an adversary. In their work, gaining new CRP material has to be implicitly authorized by the trusted entity; this concept of limiting access to CRPs is akin to controlled PUFs [3], detailed in Section II-B. Yu et al. [14] further introduce a PUF device-side nonce to prevent fault attacks or noise side-channel information based attacks [12], [13].

We continue the effort of pursuing a practical and lightweight strong PUF, coined the PUF-FSM. For authentication, the PUF-FSM overcomes one major limitation of [14]: the bounded number of available secure authentication rounds. Beyond authentication, it enables key generation, key exchange and more advanced cryptographic applications with no reliance on on-chip error correction code (ECC) logic and the associated helper data. Ultimately, the PUF-FSM is a practical controlled PUF realization. The contributions of our work are fourfold:

• We present a practical and lightweight strong PUF realization termed the PUF-FSM—also a controlled strong PUF—enabling a wide spread of applications.

• We, for the first time, eschew ECC and helper data to build a controlled PUF. We employ only the large number of error-free responses determinable in the absence of the physical APUF.

• We post-process the responses to prevent traditional machine learning attacks such as LR, which usually require a direct relationship between challenge and response.

• We prevent noise side-channel information based attacks (fault attacks) such as CMA-ES attacks by using a device-side nonce, inherited from [14], that prevents observing repeatedly evaluated responses or outputs when the same challenge is maliciously applied.

Section II introduces related work, especially the judicious selection of error-free responses from a statistical APUF model. Section III details the PUF-FSM design and analyzes its security. Applications of the PUF-FSM are presented in Section IV. Section V concludes this paper.

Y. Gao and D. C. Ranasinghe are with the Auto-ID Labs, School of Computer Science, The University of Adelaide, SA 5005, Australia. e-mail: {yansong.gao, damith.ranasinghe}@adelaide.edu.au.

II. RELATED WORK
A. APUF Model for Error-Free Response Generation

1) Modeling APUF:
The APUF consists of k stages of two 2-input multiplexers, as shown in Fig. 1, or any other units forming two signal paths.

Fig. 1. An arbiter PUF (APUF) circuit.

To generate a response bit, a signal is applied to the first stage input, while the challenge C determines the signal path to the next stage. The input signal races through the two multiplexer paths (top and bottom) in parallel. At the end of the APUF, an arbiter, e.g., a latch, determines whether the top or bottom signal arrives first and hence outputs a logic '0' or '1' accordingly.

It has been shown that an APUF can be modeled via a linear additive model, because a response bit is generated by comparing the summation of the delay segments of each stage (two 2-input multiplexers) selected by the challenge C, where C = (c_1 || c_2 || ... || c_k) [9], [15], [10]. The notation in this section follows [9], [10]. The final delay difference t_dif between the two paths is expressed as:

t_dif = ω^T Φ,   (1)

where ω and Φ are the delay vector and the parity vector, respectively, both of dimension k + 1 and functions of C. We denote σ_i^{1/0} as the delay of stage i for the crossed (c_i = 1) and uncrossed (c_i = 0) signal paths through the multiplexers, respectively; hence σ_i^1 is the delay of stage i when c_i = 1, while σ_i^0 is the delay of stage i when c_i = 0. Then

ω = (ω_1, ω_2, ..., ω_k, ω_{k+1})^T,   (2)

where ω_1 = σ_1^0 − σ_1^1, ω_i = σ_{i−1}^0 + σ_{i−1}^1 + σ_i^0 − σ_i^1 for all i = 2, ..., k, and ω_{k+1} = σ_k^0 + σ_k^1. Also

Φ(C) = (Φ_1(C), ..., Φ_k(C), 1)^T,   (3)

where Φ_j(C) = Π_{i=j}^{k} (1 − 2c_i) for j = 1, ..., k.
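The linear additive model of Eqs. (1)–(3) can be sketched directly in code. The snippet below is illustrative only: the weight vector would in reality come from the stage delays of a fabricated APUF (or be learned, as in Section II-A2), so it is drawn at random here, and the function names are our own.

```python
import numpy as np

def parity_vector(challenge):
    """Map a k-bit challenge c to the (k+1)-dimensional parity vector Phi,
    where Phi_j = prod_{i=j}^{k} (1 - 2*c_i) and Phi_{k+1} = 1, per Eq. (3)."""
    signs = 1 - 2 * np.asarray(challenge)   # c_i in {0,1} -> {+1,-1}
    # Suffix products: Phi_j = signs[j-1] * signs[j] * ... * signs[k-1]
    phi = np.append(np.cumprod(signs[::-1])[::-1], 1)
    return phi

def apuf_response(w, challenge, noise_sigma=0.0):
    """Response bit = sign of the modeled delay difference t_dif = w^T Phi
    (Eq. (1)). Optional Gaussian noise models environmental instability."""
    t_dif = w @ parity_vector(challenge)
    if noise_sigma > 0:
        t_dif += np.random.normal(0.0, noise_sigma)
    return 1 if t_dif > 0 else 0

# A hypothetical 64-stage APUF with a randomly drawn weight vector.
rng = np.random.default_rng(0)
k = 64
w = rng.normal(0.0, 1.0, k + 1)
c = rng.integers(0, 2, k)
print(apuf_response(w, c))
```

Note that the parity transform is what makes the APUF linear in ω: the {0,1} challenge bits become ±1 suffix products, so learning ω reduces to fitting a linear classifier.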
2) Reliable Response Determination:
Suppose ω is known. Given a challenge C, Φ(C) is determined and t_dif can then be calculated. Note that t_dif carries two useful pieces of information: i) sgn(t_dif) determines the binary response; ii) |t_dif| indicates the reliability of that response. If t_dif is far away from zero, the challenge reproduces its response with full confidence, without error.

In practice, physically measuring t_dif is hard, if not impossible. Xu et al. [16] recently exploited machine learning techniques, specifically the Support Vector Machine (SVM), to learn ω from a small collection of CRPs. Once an accurate ω is learned through the SVM, t_dif can be accurately predicted for an unseen C; the corresponding response and its associated reliability are then judiciously determined. If a challenge results in a t_dif that is far away from zero, its corresponding response is error-free. Xu et al. [16] demonstrate that almost 80% of randomly chosen challenges guarantee error-free responses across a wide range of operating conditions (temperature, voltage) as well as under aging effects.

However, exploiting those error-free responses, especially in a secure manner, was not considered. We take the first step towards securely exploiting these error-free responses to construct a strong PUF.

Fig. 2. Generalized controlled PUF construction.
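The error-free-response selection described above amounts to thresholding the modeled |t_dif|. The sketch below assumes a learned weight vector (here drawn at random) and an illustrative threshold; in a real deployment the threshold would be set from the measured noise level, and [16] should be consulted for the actual selection procedure.

```python
import numpy as np

def select_error_free_challenges(w, challenges, threshold):
    """Keep only challenges whose modeled |t_dif| = |w^T Phi| clears a
    margin, so the response stays stable under noise. `threshold` is an
    assumed design parameter, e.g. several noise standard deviations."""
    reliable = []
    for c in challenges:
        signs = 1 - 2 * np.asarray(c)
        phi = np.append(np.cumprod(signs[::-1])[::-1], 1)
        t_dif = w @ phi
        if abs(t_dif) > threshold:
            reliable.append((c, 1 if t_dif > 0 else 0))
    return reliable

rng = np.random.default_rng(1)
k = 64
w = rng.normal(0.0, 1.0, k + 1)            # e.g., learned via SVM as in [16]
candidates = rng.integers(0, 2, size=(1000, k))
kept = select_error_free_challenges(w, candidates, threshold=3.0)
print(f"{len(kept)} of 1000 challenges judged error-free")
```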
B. Controlled PUF
The controlled PUF [17], [3], proposed by Gassend et al., is a strong PUF construction: a PUF combined with control logic that limits the ways in which the PUF can be evaluated. In general, without permission from a trusted entity, the controlled PUF is locked and no response is meaningfully evaluated. When a user is authorized a CRP, more CRPs can be extracted from it. This is akin to key management, where session keys are derived from a master key. In practice, the controlled PUF is built such that the PUF and its control logic play complementary roles. As illustrated in Fig. 2, the PUF protects the control logic against invasive attacks, while the control logic protects the PUF from protocol-level attacks. For example, the APUF delay wires wrap the control logic; if an invasive attack attempts to probe the control logic, the PUF secret is likely to be altered or damaged. The control logic, in turn, halts adaptive evaluations of the PUF made without permission from the trusted entity.

The responses of a controlled PUF have to be post-processed, e.g., hashed. Previous works [17], [3] usually assume that error correction code (ECC) logic and the associated helper data are default parts of a PUF. In practice, the ECC logic and the storage of helper data are expensive, especially for low-end IoT devices. In addition, ensuring the availability of helper data is a non-trivial task, particularly when key renewal occurs. In that scenario, the user randomly picks a seed challenge and queries the output—e.g., hashed responses—from the controlled PUF; however, fully characterizing all CRPs of a PUF with an exponential CRP space, in particular the popular APUF, and precomputing all possible helper data is infeasible. The helper data for a user's randomly chosen challenges therefore cannot always be guaranteed. Most importantly, the use of helper data exposes the controlled PUF to potential modeling attacks exploiting noise side-channel information leakage [18], [19], [13].

The PUF-FSM is the first practical controlled PUF without ECC and the associated helper data, and with an
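The gating-plus-hashing idea of Fig. 2 can be sketched as follows. This is a conceptual toy, not the authors' circuit: the `authorized` flag abstracts the trusted entity's permission, SHA-256 stands in for the unspecified hash, and error correction is omitted (the PUF-FSM removes it entirely).

```python
import hashlib

class ControlledPuf:
    """Sketch of a generalized controlled PUF: control logic gates access
    to the raw PUF, and the response leaves the device only after hashing,
    so it is never directly exposed."""

    def __init__(self, puf, authorized=False):
        self.puf = puf                  # callable: challenge bytes -> response bytes
        self.authorized = authorized    # abstracts the trusted entity's permission

    def evaluate(self, challenge: bytes) -> bytes:
        if not self.authorized:
            raise PermissionError("evaluation not permitted by trusted entity")
        response = self.puf(challenge)            # raw, noisy response (internal only)
        return hashlib.sha256(response).digest()  # only the hash is emitted
```

The complementary-protection argument from the text is reflected in the structure: the raw `response` never crosses the `evaluate` boundary, and the permission check sits in front of every query.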
explicit countermeasure to reliability-based fault attacks.

Fig. 3. General structure of the PUF-FSM. Only the correct sequential challenges producing R can unlock S_OE. If the enable signal S_OE is disabled, the hash output is meaningless, presenting random values. Otherwise, the key is generated from part of the response R, denoted R_secret, and the nonce, where key = HASH(R_secret, nonce).

C. Finite State Machine (FSM)
A finite state machine (FSM) is a popular form of sequential logic. In an FSM, the next state depends on both the input (transition edge) and the current state. The FSM has been employed for IC active metering [20], [21], [22]. In that context, the FSM is combined with a unique chip identifier, usually a weak PUF, whose responses act as transition edges to unlock a function such as an Intellectual Property (IP) block; the PUF response to a given challenge acts as a secret key. Previous works [20], [21], [22] extract a constant secret or key from the noisy PUF responses, so the requirements for on-chip ECC and helper data remain.

Our work employs the FSM as the control logic realizing the said controlled PUF. We release a large number of challenge secret pairs; neither ECC nor helper data is necessary. Beyond IC active metering, our work enables authentication, key generation, key exchange and more advanced cryptographic protocols where a shared secret is required.

III. PUF-FSM: DESIGN AND SECURITY ANALYSIS
A. PUF-FSM Structure
The PUF-FSM structure is generalized in Fig. 3. It consists of a PUF, an FSM, a hash block and a random number generator (RNG). As in prior work [14], the direct PUF responses can only be evaluated by the trusted entity in a secure environment to build the APUF statistical model(s); the direct access is destroyed afterwards, e.g., by fusing the wire.

During deployment, a set of n sequential challenges, C_set, is issued by the trusted entity, e.g., the server, and the corresponding error-free response R of length n is produced. R is fed sequentially into the FSM, controlling the transitions of the FSM states. Before operation, the FSM resets to S_0. Only a series of correct TRs—sub-responses enabling the traversal from the current state to the next—guarantees that the FSM reaches S_OE, the activation state that unlocks the key output. Hence only the server owning the statistical APUF model is capable of issuing a correct challenge set C_set to unlock S_OE and generate a meaningful output as a key. The key is HASH(R_secret, nonce), where R_secret is part of R; the formation of R_secret is described shortly.
Fig. 4. FSM example with five levels (L = 5) and three depths (D = 3). When a correct transition edge TR, e.g., 1100, is fed, the current state transitions into the corresponding next-level state. An applied incorrect edge TR̄ keeps the FSM in its current state, marked by the returning arrow.

Fig. 5. Part of the n-bit R, R_secret, is hashed to generate the key. All remaining bits after reaching the enable state S_OE do not contribute to the key. The FSM example of Fig. 4 is used for the state-traversal illustration, marked by the dotted red line.

Whenever S_OE is disabled, the output presents random values.

An exemplary FSM construction is depicted in Fig. 4. At the beginning of PUF-FSM operation, the FSM resets to its initial state S_0. Assume the first transition edge fed is 0110; the FSM then advances to the next state. Similarly, if the following edge is 0001, the FSM advances again. If the fed edge is not one of the valid transition edges of the current state—in other words, a TR̄ is fed—the FSM remains in its current state. In this example, each even-level state has D valid transition edges, any of which leads to one of the D following states, while the remaining states have only one correct transition edge leading to the following state.

Though other FSM structures can be envisioned, the FSM in our proposal has L—always an odd number—internal state layers (levels), with each even internal layer having D parallel states. A constant number, L + 1, of correct TRs is required to reach S_OE. Both the correct edges TR and the incorrect edges TR̄ are 4-bit in this example, and the L + 1 correct TRs together with the maximum number n_lmax of incorrect TR̄s account for all n response bits, where for convenience we assume n is always a multiple of 4. In practice, S_OE can be activated by applying L + 1 correct TRs and n_l incorrect TR̄s, noting that n_l ≤ n_lmax.
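The unlock behavior above can be sketched as a small state machine. The edge table below is invented for illustration (a real design derives it from the fabricated FSM layout), and the example compresses Fig. 4 to three levels, but it shows the essential rule: a correct TR advances one level, anything else (a TR̄) leaves the state unchanged, and only reaching the final state reports unlocked.

```python
class PufFsm:
    """Toy FSM in the spirit of Fig. 4: a chain of states unlocked only by
    the correct sequence of n_TR-bit transition edges."""

    def __init__(self, edges):
        # edges[i] = set of 4-bit strings that advance from state i to i+1
        self.edges = edges
        self.state = 0                      # S_0
        self.final = len(edges)             # stands in for S_OE

    def feed(self, chunk):
        if self.state < self.final and chunk in self.edges[self.state]:
            self.state += 1                 # correct TR: advance one level
        # any other chunk (a TR-bar) leaves the FSM in its current state

    def run(self, response_bits, n_tr=4):
        for i in range(0, len(response_bits), n_tr):
            self.feed(response_bits[i:i + n_tr])
        return self.state == self.final     # S_OE reached?

# Hypothetical 3-level example: the middle layer accepts any of D = 3
# parallel edges, the outer layers exactly one each.
edges = [{"0110"}, {"1100", "1010", "0011"}, {"0001"}]
fsm = PufFsm(edges)
print(fsm.run("0110" + "1010" + "0001" + "1111"))  # trailing redundant bits
```

As in the design, the redundant trailing chunk is still consumed but cannot change the outcome once the final state is reached.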
The meaningful key is given only after all n bits of R are fed into the FSM—that is, after n clock cycles—and S_OE is activated/reached. The key is a hash of the part of R consisting of all sequentially fed L + 1 correct TRs and n_l incorrect TR̄s. An illustration of the key formation is shown in Fig. 5; the state-traversal path is illustrated in Fig. 4 by the dotted red line. Once S_OE is reached, the remaining TR̄s are neglected—they are not hashed into the key. It is worth stressing again that these remaining response bits are still fed into the FSM, as redundant bits that hide the length of R_secret.

Device Nonce:
The device nonce is exploited to prevent observing repeatedly evaluated responses for the same challenge [12], [13], [19]; the security rationale is made clear in Section III-B. The nonce is part of the key, where key = HASH(R_secret, nonce). Note that the key differs on every evaluation owing to the refreshed nonce, even when the same C_set issued by the trusted entity is applied repeatedly. The nonce is visible; the security relies on R_secret.

Design Highlights: (1) Only under a correct set of sequential challenges, C_set, can the final state S_OE of the FSM be reached or activated. (2) The number n_l of TR̄s before reaching S_OE and the number n_lmax − n_l after reaching S_OE are flexibly configured, controlled and known only by the trusted entity. (3) A meaningful key is presented only when S_OE is activated and all n error-free response bits have been fed into the FSM; if S_OE is disabled, a random value is presented. (4) The device nonce prevents repeated response observations for the same maliciously applied challenge.

B. Security Analyses

1) Adversary Model:
We adopt the same assumption as for controlled PUFs [17], [3]: physical attacks on the control logic are likely to alter or even destroy the PUF itself. The adversary can eavesdrop on the communication channel and arbitrarily apply challenges to the PUF-FSM input to observe the PUF-FSM output. Furthermore, the nonce is visible. The adversary attempts to obtain useful information to learn the APUF model in the PUF block.
2) Brute-force Attacks:
For an adversary, the probability of discovering a meaningful key by guessing a correct C_set without assistance from the trusted entity is expressed as:

Probability = (D / 2^{n_TR})^{(L+1)/2} × (1 / 2^{n_TR})^{(L+1)/2},   (4)

where n_TR is the bit length of a transition edge. In the example of Fig. 4, n_TR is four. For each even layer, the probability of guessing one correct transition edge is D / 2^{n_TR}, while the probability of guessing the correct transition edge for a given odd layer is 1 / 2^{n_TR}.

The brute-force attack becomes computationally infeasible as the number of FSM state layers L or the edge length n_TR increases. Moreover, even if an adversary luckily guesses a correct C_set that unlocks S_OE, (s)he is actually incapable of recognizing it: without prior knowledge of a correct C_set, the output of the PUF-FSM looks random to the adversary on every evaluation, owing to the refreshed nonce.
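Eq. (4) is easy to evaluate for concrete parameters. The sketch below plugs in the running example of Fig. 4 (L = 5, D = 3, n_TR = 4); the function name is our own.

```python
def guess_probability(D: int, L: int, n_tr: int) -> float:
    """Eq. (4): probability of blindly guessing a correct C_set.
    (L+1)/2 even layers each accept D of the 2^n_tr possible edges;
    (L+1)/2 odd layers each accept exactly one."""
    assert L % 2 == 1, "L is always odd in this construction"
    half = (L + 1) // 2
    return (D / 2 ** n_tr) ** half * (1 / 2 ** n_tr) ** half

# Running example of Fig. 4: L = 5 levels, D = 3 depths, 4-bit edges.
p = guess_probability(D=3, L=5, n_tr=4)
print(p)   # (3/16)^3 * (1/16)^3, roughly 1.6e-6
```

Increasing L or n_TR shrinks this geometrically, which is the "computationally infeasible" claim made above.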
3) Modeling Attacks:
The most plausible attacks on strong PUFs are modeling attacks. Numerous works [9], [10], [11], [12], [13] have shown the vulnerability of strong PUFs to modeling attacks; the deemed-strong but later broken PUFs include the XOR-APUF, Feedforward APUF, Lightweight Secure PUF and even the Slender PUF.

In the PUF-FSM, arbitrary CRP collection is disabled for any party except the trusted entity during the secure enrollment phase. After the enrollment phase, the response is never directly exposed except through hashing, and its usage is further controlled by the FSM. The control logic shown in Fig. 3 thus protects the underlying APUF(s) from modeling attacks such as LR and SVM, for which knowledge of responses is necessary [9], [10].

To perform the recently revealed modeling attacks exploiting helper data information [19], [13]—in other words, the unreliability information of a given CRP—knowledge of which challenges are unreliable is a premise. Unlike traditional modeling attacks, e.g., LR, reliability-based fault CMA-ES attacks [13] do not require knowledge of the response value for a given challenge. Such a powerful CMA-ES attack even threatens the security of a controlled PUF that employs helper data. In our PUF-FSM, no helper data is involved; exploitation of information leakage from helper data to perform reliability-based attacks is therefore excluded.

Now, setting the device nonce aside for a moment, we examine the means of finding unreliable challenges by observing the PUF-FSM output rather than gaining information from helper data. By applying arbitrary challenges to the PUF-FSM without prior knowledge of a correct C_set, the adversary observes no information that could be used to discover unreliable challenges. This rests on the fact that the output of the PUF-FSM is random, i.e., meaningless, whenever the enable signal S_OE is locked/disabled.
The complexity of unlocking S_OE without the participation of the trusted entity equals that of the brute-force attack given in (4).

We note that there still exists a potential way to determine an unreliable challenge through exhaustive search, under the assumption that a prior C_set has been eavesdropped and the adversary now holds the physical PUF-FSM. The adversary chooses an unused challenge C_x to replace one challenge C_i in the eavesdropped C_set and observes the output of the PUF-FSM. If C_x is an unreliable challenge and its response contributes to a correct TR, then under repeated evaluations the adversary can identify it as unreliable when the key and random outputs are alternately exhibited. If C_x is unreliable and its response contributes to a TR̄, then under repeated evaluations an unreliable challenge is identified when two differing keys are alternately exhibited. Through continued exhaustive search, other unreliable challenges can be determined as well, enabling reliability-based attacks.

By employing the device nonce, as in [14], regardless of whether C_x is unreliable, the nonce is refreshed on each evaluation, so observing the same key by repeatedly applying the same challenge is infeasible. Discovery of unreliable challenges is thereby disabled, and the reliability-based attacks [12], [13] are, as a result, prevented.

IV. APPLICATIONS
A. Mutual Authentication
The PUF-FSM achieves mutual rather than the common unidirectional authentication. Recall that only a trusted entity is capable of issuing a correct challenge sequence to activate S_OE; consequently, only the PUF-FSM device and the trusted entity know R_secret.

When the PUF-FSM is transferred to the user, the trusted entity issues a C_set and sends it to the user, possibly through insecure communication channels. The user presents the C_set to the PUF-FSM and sends both the nonce and the PUF-FSM output (key) back to the trusted entity. The trusted entity computes a key, HASH(R_secret, nonce), and compares it with the key received. If they match, the user holding the PUF-FSM is authenticated. Once the user is authenticated, the user applies the same C_set again to the PUF-FSM to obtain a refreshed output (key), sends out the new nonce, and asks the trusted entity for the refreshed key it computes. The trusted entity is authenticated by the user only if the received computed key matches the key produced by the PUF-FSM.

B. Key Exchange
Following the foregoing mutual authentication, consider the key exchange scenario between the user and the trusted entity. The user applies the same C_set and sends the nonce to the trusted entity, but no key (shared key) is transmitted between the two parties. Now only the user holding the PUF-FSM and the trusted entity know the shared key: the user obtains it from the PUF-FSM, while the server computes it by hashing R_secret and the nonce.
Serving as a controlled PUF, the immediate benefits of the PUF-FSM are the exclusion of on-chip ECC logic and of helper data usage, which finally releases the constraints on a practical realization of the controlled PUF. In addition, eliminating ECC and helper data removes the security concerns of previous controlled PUF designs with respect to modeling attacks, where the helper data leaks information [13], [18], [19].
1) Key Acquisition:
By intentional design, the controlled PUF restricts the means by which the PUF can be evaluated. Whoever holds the PUF-FSM is unable to evaluate it to obtain a ⟨C_set, R_secret⟩ tuple without permission from the trusted entity. To acquire a ⟨C_set, R_secret⟩, the mutual authentication is first performed to establish trust between the trusted entity and the user who holds the physical PUF-FSM. The trusted entity then issues a fresh set of challenges to the user, who is now authorized with a ⟨C_set, R_secret⟩.
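Once a ⟨C_set, R_secret⟩ is acquired, both sides derive keys as HASH(R_secret, nonce). A minimal sketch, with SHA-256 standing in for the unspecified on-chip hash and the concatenation order as an assumption:

```python
import hashlib
import secrets

def derive_key(r_secret: bytes, nonce: bytes) -> bytes:
    """key = HASH(R_secret, nonce); any fresh nonce yields a fresh
    sub-session key without issuing a new challenge set."""
    return hashlib.sha256(r_secret + nonce).digest()

r_secret = b"\xb5\x31\x07\x9c"   # placeholder for the authorized R_secret bits

# The nonce is visible: the trusted entity and the user compute the same
# sub-key from it, while a refreshed nonce gives a different key each time.
nonce_a = secrets.token_bytes(16)
nonce_b = secrets.token_bytes(16)
print(derive_key(r_secret, nonce_a) != derive_key(r_secret, nonce_b))
```

The security rests entirely on R_secret remaining secret; the nonce only provides freshness.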
2) Key Renewal:
Once the user is authorized with a ⟨C_set, R_secret⟩, (s)he is able to renew arbitrary keys from the PUF-FSM. R_secret can be treated as a master secret from which all other sub-keys, HASH(R_secret, nonce), are available. Given a known nonce, the user and the trusted entity can retrieve sub-keys or sub-session keys without issuing a new challenge set. A shared key between the two parties indeed enables a wide variety of standard cryptographic protocols to be implemented [3].

V. CONCLUSION
We have presented a practical controlled strong PUF, the PUF-FSM, by (1) exploiting error-free responses determined in the absence of the physical APUF and (2) controlling the means of evaluating the PUF through control logic. The PUF-FSM requires neither on-chip ECC nor helper data, which were previously a must when extracting a key. As a controlled PUF, it holds the promise of a cost-effective way to increase resistance to various attacks, especially invasive attacks, for IoT devices. Security analyses demonstrate that the PUF-FSM is resilient to modeling attacks.

REFERENCES

[1] G. E. Suh and S. Devadas, "Physical unclonable functions for device authentication and secret key generation," in Proc. Design Automation Conf. (DAC). ACM, 2007, pp. 9–14.
[2] C. Herder, M.-D. Yu, F. Koushanfar, and S. Devadas, "Physical unclonable functions and applications: A tutorial," Proc. IEEE, vol. 102, pp. 1126–1141, 2014.
[3] B. Gassend, M. V. Dijk, D. Clarke, E. Torlak, S. Devadas, and P. Tuyls, "Controlled physical random functions and applications," ACM Transactions on Information and System Security, vol. 10, no. 4, p. 3, 2008.
[4] B. Gassend, D. Clarke, M. Van Dijk, and S. Devadas, "Silicon physical random functions," in Proc. Conf. Computer and Communications Security. ACM, 2002, pp. 148–160.
[5] U. Ruhrmair and M. Van Dijk, "PUFs in security protocols: Attack models and security evaluations," in IEEE Symposium on Security and Privacy (SP), 2013, pp. 286–300.
[6] R. Pappu, B. Recht, J. Taylor, and N. Gershenfeld, "Physical one-way functions," Science, vol. 297, no. 5589, pp. 2026–2030, 2002.
[7] U. Ruhrmair, C. Jaeger, M. Bator, M. Stutzmann, P. Lugli, and G. Csaba, "Applications of high-capacity crossbar memories in cryptography," IEEE Trans. Nanotechnol., vol. 10, no. 3, pp. 489–498, 2011.
[8] A. Vijayakumar, V. C. Patil, C. B. Prado, and S. Kundu, "Machine learning resistant strong PUF: Possible or a pipe dream?" in Int. Symp. Hardware Oriented Security and Trust (HOST). IEEE, 2016, pp. 19–24.
[9] U. Ruhrmair, J. Solter, F. Sehnke, X. Xu, A. Mahmoud, V. Stoyanova, G. Dror, J. Schmidhuber, W. Burleson, and S. Devadas, "PUF modeling attacks on simulated and silicon data," IEEE Trans. Inf. Forensics Security, vol. 8, no. 11, pp. 1876–1891, 2013.
[10] U. Rührmair, F. Sehnke, J. Sölter, G. Dror, S. Devadas, and J. Schmidhuber, "Modeling attacks on physical unclonable functions," in CCS, 2010, pp. 237–249.
[11] M. Majzoobi, F. Koushanfar, and M. Potkonjak, "Testing techniques for hardware security," in Proc. Int. Test Conf. (ITC), 2008, DOI:10.1109/TEST.2008.4700636.
[12] G. T. Becker, "The gap between promise and reality: On the insecurity of XOR Arbiter PUFs," in Cryptographic Hardware and Embedded Systems (CHES). Springer, 2015, pp. 535–555.
[13] G. T. Becker, "On the pitfalls of using Arbiter-PUFs as building blocks," IEEE Trans. Comput.-Aided Design Integr. Circuits Syst., vol. 34, no. 8, pp. 1295–1307, 2015.
[14] M.-D. Yu, M. Hiller, J. Delvaux, R. Sowell, S. Devadas, and I. Verbauwhede, "A lockdown technique to prevent machine learning on PUFs for lightweight authentication," IEEE Transactions on Multi-Scale Computing Systems, 2016, DOI:10.1109/TMSCS.2016.2553027.
[15] D. Lim, "Extracting secret keys from integrated circuits," Ph.D. dissertation, Massachusetts Institute of Technology, 2004.
[16] X. Xu, W. Burleson, and D. E. Holcomb, "Using statistical models to improve the reliability of delay-based PUFs," in Proc. Symp. VLSI. IEEE, 2016, pp. 547–552.
[17] B. Gassend, D. Clarke, M. Van Dijk, and S. Devadas, "Controlled physical random functions," in Proc. Annual Computer Security Applications Conf. IEEE, 2002, pp. 149–160.
[18] G. T. Becker, R. Kumar et al., "Active and passive side-channel attacks on delay based PUF designs," IACR Cryptology ePrint Archive, vol. 2014, p. 287, 2014.
[19] J. Delvaux, D. Gu, D. Schellekens, and I. Verbauwhede, "Helper data algorithms for PUF-based key generation: Overview and analysis," IEEE Trans. Comput.-Aided Design Integr. Circuits Syst., vol. 34, no. 6, pp. 889–902, 2015.
[20] F. Koushanfar and G. Qu, "Hardware metering," in Proc. Design Automation Conf. ACM, 2001, pp. 490–493.
[21] F. Koushanfar, "Provably secure active IC metering techniques for piracy avoidance and digital rights management," IEEE Trans. Inf. Forensics Security, vol. 7, no. 1, pp. 51–63, 2012.
[22] J. Zhang, Y. Lin, Y. Lyu, and G. Qu, "A PUF-FSM binding scheme for FPGA IP protection and pay-per-device licensing," IEEE Trans. Inf. Forensics Security, vol. 10, no. 6, pp. 1137–1150, 2015.
[23] M. Majzoobi, F. Koushanfar, and M. Potkonjak, "Lightweight secure PUFs," in Proc. Int. Conf. Computer-Aided Design (ICCAD). IEEE, 2008, pp. 670–673.