On Entropy and Bit Patterns of Ring Oscillator Jitter
Markku-Juhani O. Saarinen
PQShield Ltd., Prama House, 267 Banbury Rd., Oxford OX2 7HT, UK. Email: [email protected]
Abstract—Thermal jitter (phase noise) from a free-running ring oscillator is a common, easily implementable physical randomness source in True Random Number Generators (TRNGs). We show how to evaluate entropy, autocorrelation, and bit pattern distributions of such entropy sources, even when they have low jitter levels or some bias. Our numerical evaluation algorithms vastly outperform simple Monte Carlo simulations in speed and accuracy. This helps in choosing the most appropriate parameters for TRNG self-tests and cryptographic post-processing. We also propose a new, safer lower bound estimation formula for the entropy of such randomness sources.
I. INTRODUCTION: RING OSCILLATOR JITTER
Free-running (ring) oscillators are widely used as physical noise sources in True Random Number Generators (TRNGs). In many ways, these designs are direct descendants of the oscillator-based "electronic roulette wheel" used to generate the RAND tables of random digits in the late 1940s [1]. A typical design (Fig. 1) has two oscillators: an unsynchronized ring oscillator and a reference oscillator that is used to sample bits from the free-running oscillator. Spontaneous and naturally occurring phase shifts between the oscillators cause unpredictability of output bits. These random oscillator period variations are known as oscillator jitter [2].

A pioneering ring oscillator RNG chip was described and patented in 1984 by Bell Labs researchers [3], [4]. This type of noise source can be realized with "standard cells" in HDL and requires no special manufacturing processes, making it a popular choice. More modern versions are used as noise sources for cryptographic key generation in common microchips from AMD [5] and ARM [6].

Physical entropy sources are regulated in cryptographic security standards such as NIST's SP 800-90B [7] (for FIPS 140-3) and BSI's AIS 31 [8] (for Common Criteria). These mandate health monitoring (built-in statistical tests) and appropriate post-processing. Cryptographic post-processing methods such as the SHA-2 hash [9] completely mask statistical defects while still allowing guessing attacks. Noise source entropy evaluation is therefore crucial for determining the sampling rate and "compression ratio" of the conditioner.
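The role of the compression ratio can be illustrated with back-of-the-envelope arithmetic. All figures below are hypothetical and chosen only for illustration; they are not taken from any particular standard or device:

```python
import math

# Hypothetical figures: a raw noise source assessed at 0.3 bits of
# min-entropy per sample bit, feeding a conditioner (e.g. SHA-256) that
# must accumulate 256 bits of entropy, with a 2x engineering margin.
H_min_per_bit = 0.3          # assessed min-entropy per raw sample bit
seed_entropy_bits = 256      # entropy required at the conditioner output
safety_margin = 2.0          # margin over the bare information-theoretic minimum

raw_bits_needed = math.ceil(safety_margin * seed_entropy_bits / H_min_per_bit)
compression_ratio = raw_bits_needed / seed_entropy_bits

print(raw_bits_needed)       # raw sample bits to collect per 256-bit seed
print(compression_ratio)     # input/output "compression ratio"
```

An over-optimistic entropy assessment would shrink `raw_bits_needed` and silently under-seed the conditioner, which is why the paper argues for conservative, model-based lower bounds.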
A. Physical Models and Their Limits
An important contributor to the randomness of jitter in a ring-oscillator inverter loop (Fig. 1) is Johnson-Nyquist thermal noise [10], [11], which occurs spontaneously in any conductor (regardless of quality) as a result of thermal agitation of free electrons. Jitter is a macroscopic manifestation of this quantum-level [12] Brownian effect.

Timing jitter is a relatively well-understood phenomenon for many reasons; it is an important limiting factor to the synchronous operating frequency of any digital circuit. An example of a detailed physical model for ring oscillator phase noise and jitter is provided by Hajimiri et al. [13]-[15], which we recap here. The randomness of timing jitter has a strongly Gaussian character. The jitter accumulates in the phase difference against the reference clock, with variance growing almost linearly from one cycle to the next. Under common conditions, the transition length standard deviation (uncertainty) σ_t after time t can be estimated for CMOS ring oscillators as (after [14, Eqns. 2.6, 5.18]):

  σ_t = κ √t ≈ √( η · (kT/P) · (V_DD/V_char) · t )   (1)

In this derivation of physical jitter κ we note especially the Boltzmann constant k and absolute temperature T; other variables include power dissipation P, supply voltage V_DD, device characteristic voltage V_char, and a proportionality constant η. The number of stages (N) and frequency f affect power P via common dynamic (switching) power equations.

As noted in [14, Sect. 5.2.1], such derived models only express "inevitable noise sources" - not "extra disturbance, such as substrate and supply noise, or noise contributed by extra circuitry or asymmetry in the waveform" - which will increase jitter. Many of these factors are difficult to model individually or are beyond digital designers' control. In practice κ is measured experimentally, and the existence of jitter (and hence, fresh thermal noise entropy) is continuously monitored by auxiliary circuits that are a part of the TRNG.
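The square-root accumulation law of Eqn. 1 is easy to exercise numerically. The κ value below is purely hypothetical (in practice it is measured for the specific device):

```python
import math

# Jitter accumulation per Eqn. 1: sigma_t = kappa * sqrt(t).
# kappa is normally measured experimentally; this value is hypothetical.
kappa = 2e-8   # units s^(1/2)

def sigma_t(t):
    """Accumulated timing jitter (standard deviation, seconds) after t seconds."""
    return kappa * math.sqrt(t)

# Accumulated jitter over one sampling interval of a 1 MHz sampler:
dt = 1e-6
print(sigma_t(dt))   # jitter std dev accumulated per sample

# Quadrupling the accumulation time only doubles the standard deviation,
# which is why slower sampling yields more entropy per sample bit.
assert abs(sigma_t(4 * dt) / sigma_t(dt) - 2.0) < 1e-9
```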
Fig. 1. A ring oscillator consists of an odd (here N = 3) number of inverters connected into a free-running loop. The output is sampled with a latch (D flip-flop) using an independent reference clock (external crystal or system clock), producing "raw" entropy bits z_i. Transition times are affected by jitter (largely from Johnson-Nyquist thermal noise), whose accumulation causes samples to become increasingly unpredictable.

B. From Statistical Random Tests to Entropy Evaluation

A 1948 report by RAND [16] describes the statistical tests performed on the output of the "million digit" oscillator device [17]. The tests were based on work by Kendall and Smith [18], [19] with their late-1930s electromechanical random number device: Frequency test, Serial test, Poker test, and Gap test. It is remarkable that versions of these tests remained in use until the 2000s in the FIPS 140-2 cryptographic standard [20].

While such "black box" statistical test suites - including Marsaglia's DIEHARD and its successors [21], [22] and NIST SP 800-22 [23] - may be useful when evaluating pseudorandom generators for Monte Carlo simulations, they are poorly suited for security applications. It is illustrative that a test existed in NIST SP 800-22 even in 2010 to see if an LFSR is "long enough" to be "considered random" [23, Sect. 2.10].
Elementary cryptanalysis with finite field linear algebra shows that the internal state of an LFSR can be derived from a small amount of output, allowing both future and past outputs to be reproduced with little effort - a devastating scenario if that output is to be used for cryptographic keying.

By 2001 at least the German AIS 20/31 [8], [24] Common Criteria IT security evaluations had diverged from the purely black-box statistical approach and instead concentrated on quantifying the entropy produced by a noise source and evaluating its post-processing methods; they also considered implementation security, cryptanalytic attacks, and vulnerabilities. The current NIST security evaluation methodology for physical noise sources [7] also acknowledges that general statistical properties of raw noise are less important than evaluation of its entropy content, but at the time of writing it does not require stochastic models or detailed analysis of physical sources.

For purposes of security engineering, pseudorandomness in the output of the physical source is an unambiguously negative feature, as it makes the assessment of true entropy more difficult. On the other hand, redundancy from a well-behaved stochastic model is easily manageable via cryptographic post-processing. Once seeded, standard (cryptographic) Deterministic Random Bit Generators (DRBGs [25]) guarantee indistinguishability from random, in addition to providing prediction and backtracking resistance.
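The LFSR weakness mentioned above can be demonstrated in a few lines: observing 2n output bits of an n-bit LFSR suffices to recover its feedback taps by solving a GF(2) linear system, after which the entire keystream (past and future) is reproducible. This toy demonstration is ours, with an arbitrarily chosen small LFSR:

```python
def lfsr_bits(c, seed, nbits):
    # Fibonacci LFSR: s[k] = c[0]*s[k-1] + ... + c[n-1]*s[k-n]  (mod 2).
    s = list(seed)
    n = len(c)
    while len(s) < nbits:
        s.append(sum(c[i] * s[-1 - i] for i in range(n)) % 2)
    return s

def recover_taps(bits, n):
    # Build the GF(2) system M c = y from 2n observed bits and solve it by
    # Gauss-Jordan elimination mod 2. (M is invertible for typical
    # maximal-length LFSR output segments.)
    M = [[bits[j + n - 1 - i] for i in range(n)] + [bits[j + n]]
         for j in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col])
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col]:
                M[r] = [a ^ b for a, b in zip(M[r], M[col])]
    return [M[r][n] for r in range(n)]

# Observe 8 bits of a 4-bit LFSR (taps x^4 + x + 1), recover the taps,
# and reproduce the whole keystream from the first 4 observed bits.
keystream = lfsr_bits([1, 0, 0, 1], [0, 0, 0, 1], 30)
taps = recover_taps(keystream[:8], 4)
assert taps == [1, 0, 0, 1]
assert lfsr_bits(taps, keystream[:4], 30) == keystream
```

For realistic sizes the standard tool is the Berlekamp-Massey algorithm, which additionally finds the shortest recurrence; the linear-algebra view above is the "elementary cryptanalysis" referred to in the text.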
C. Ring Oscillators as Wiener Processes
Pioneering work on modern physical RNG entropy estimation was presented by Killmann and Schindler [26], whose stochastic model uses independent and identically distributed transition times (half-periods) to model jitter. Baudet et al. [27] take a frequency domain (phase noise) approach. Our model broadly follows these and also the one by Ma et al. [28]. Baudet et al. propose a Shannon entropy lower bound [27, Eqn. 14], which has been used in engineering (e.g. [29]):

  H ≥ 1 − (4 / (π² ln 2)) e^(−4π²Q) + O( e^(−6π²Q) )   (2)

Here the "quality factor" Q = σ²Δt corresponds to κ in the physical model (Eqn. 1). We observe that the bound of Eqn. 2 is never lower than 1 − 4/(π² ln 2) ≈ 0.415 even when Q approaches zero - this estimate holds only under some additional assumptions.

D. Our Goals: FIPS 140-3 and More Generic ROs
Prior works generally state that the frequency of the free-running oscillator is much higher than the sampling frequency, and that they do not have a harmonic relationship. The source is also often taken to be unbiased and assumed to have a relatively high amount of entropy per sample. In this work, we show how to compute entropy, autocorrelation coefficients, and bit pattern probabilities also for less ideal parameters. Our goal is to have guarantees for entropy and min-entropy in TRNG designs, as this is required in the current cryptography standards AIS 31 [8] and FIPS 140-3 [30] / NIST SP 800-90B [7].

II. A STOCHASTIC MODEL AND ITS DISTRIBUTIONS
We consider the jitter accumulation σ² ∼ Q at sampling time rather than the variance of (half) periods [28, Sect. 2.2]. We also ease analysis by using the sampling period as a unit of time - sample z_i is at "time" i, and variance is defined accordingly. Our time-phase accumulation matches the physical model (κ of Eqn. 1) and also accounts for spontaneous, purely Brownian transitions and ripple when the relative frequency F of the oscillators is very small or harmonic.

For sampled, digital oscillator sources, we may ignore the signal amplitude and consider a pulse wave with period T and relative pulse width ("duty cycle") D. We assume a constant sampling rate and use the sample bits as a measure of time. We normalize the sinusoidal phase ω as x = (ω − δ)/(2π) to the range 0 ≤ x < 1, where δ is the rising edge location. The average frequency F ≈ 1/T mod 1 is a per-bit increment to the phase, and σ² is its per-bit accumulated variance (Eqn. 1).

Definition 1 (Sampling Process):
The behavior of an (F, D, σ) noise source and its bit sampler is modeled as:

  x_i = ( x_{i−1} + N(F, σ²) ) mod 1   (3)

  z_i = 1 if x_i < D,  z_i = 0 if x_i ≥ D.   (4)

Here z_i ∈ {0, 1} is an output bit, and x_i ∈ [0, 1) is the normalized phase at sampling time. F is the frequency in relation to the sampling frequency, and σ represents jitter. Due to normalization (x mod 1 ≡ x − ⌊x⌋) and negative −F symmetry, F can be reduced to the range [0, 1/2]. One may view this as a "harmonic" reduction, but there is no restriction for the sampler to run faster than the source oscillator.

The sampling process can be easily implemented to generate simulated bits z_1, z_2, z_3, ... for given parameters (F, D, σ). This Wiener process is clearly only an idealized stochastic model, and its applicability for modeling specific physical random number generators must be individually evaluated.

A. Distance to Uniform
The Gaussian probability density function in Eqn. 3 becomes modularly wrapped (Fig. 2). The classical assumption for ring oscillators is that if the accumulated variance σ² is large enough in relation to the sampling rate, the modular step density function will become essentially "flat" in [0, 1); furthermore, if (x_i − x_{i−1}) mod 1 is uniformly random, then the bit sequence z_i is correlation-free. Some sources simply state ad hoc decorrelation criteria (e.g. a fixed lower bound on σ).

Fig. 2. Gaussian phase transition and the equivalent modular density: phase x_{i+1} when the previous x_i = 0, F = 12.3, D = 0.5, σ = 0.16; modular transition density f_s in 0 ≤ x < 1 with F mod 1 = 0.3, σ = 0.16.

We will calculate the step function's statistical distance to the uniform distribution. The density of the unbounded step function (Eqn. 3) can be equivalently defined over the domain 0 ≤ x < 1 or as a 1-periodic function in R/Z (see Fig. 2):

  f_s(x) = ( 1 / (√(2π) σ) ) Σ_{i ∈ Z} e^( −(x − F + i)² / (2σ²) )   (5)

We have f_s(a) = f_s(a + 1) and ∫_a^{a+1} f_s(x) dx = 1 for all a ∈ R. By choosing a tailcut value τ one can limit the sum to ⌊−τσ⌋ ≤ i ≤ ⌈τσ⌉. This allows us to determine the maximum at f_s(F) and the minimum at f_s(F + 1/2) for given σ. These are bounds for its statistical (total variation) distance to the uniform distribution (see Table I). We see that this idealized "1-dimensional lattice Gaussian" becomes cryptographically uniform once σ is sufficiently large.

TABLE I. Extrema of the probability density function f_s(x) for some σ.

B. Autocorrelation and Sampling Intervals
We define a scaled, binary delay-k autocorrelation measure −1 ≤ C_k ≤ +1:

  C_k = 2 Pr( z_i = z_{i+k} ) − 1.   (6)

We may estimate C_k, k ≥ 1 for a finite m-bit sequence as

  C′_k = ( 1 / (m − k) ) Σ_{i=1}^{m−k} (2 z_i − 1)(2 z_{i+k} − 1).   (7)

For convenience, we set C′_0 = (1/m) Σ_{i=1}^{m} (2 z_i − 1) to represent simple bias in the same vector; C′_0 approximates 2D − 1.

Theorem 1:
With fixed D and σ > 0 or F ∉ Q we have

  C_k(F, D, σ) = C_1( kF mod 1, D, √k · σ )  for k ≥ 1.   (8)

Proof 1:
The variance of independent random variables is additive, by induction in k, as is the mean. The difference x_k − x_0 will then have the distribution N(kF, kσ²) mod 1. Only with either noisy or non-rational (non-harmonic) F may we take x_0 in Equation 3 to be uniformly distributed in [0, 1).

C. Computing C_k to High Precision Without Simulation

Let p_00, p_01, p_10, p_11 be the frequencies of adjacent bit pairs (z_i, z_{i+1}) present in the bit sequence z_i (Eqn. 4) in the model. We'll pick one, p_11 = Pr(z_i = 1 and z_{i+1} = 1). The condition z_i = 1 limits the density of x_i to the "boxcar" g_1:

  g_1(x) = 1 if x ∈ [0, D),  0 if x ∉ [0, D).   (9)

We also define g_0(x) = 1 if x ∈ [D, 1) and zero elsewhere. The addition of random variables corresponds to convolution of their density functions; the convolution f_1 = g_1 ∗ f_s with the step function f_s (Eqn. 5) yields the probability density of x_{i+1} conditioned on z_i = 1. The probability mass of the second bit z_{i+1} = 1 is in the range x_{i+1} ∈ [0, D) and we have

  p_11 = ∫_0^D f_1(x) dx.   (10)

The convolution density f_1 = g_1 ∗ f_s can be expressed as

  f_1(x) = (1/2) Σ_{i ∈ Z} [ erf(a_i) − erf(b_i) ]   (11)

where a_i = (x + i − F) / (√2 σ) and b_i = (x + i − F − D) / (√2 σ). An indefinite integral S = ∫ f_1(x) dx with the same a_i, b_i is

  S(x) = (σ / √2) Σ_{i ∈ Z} [ a_i erf(a_i) − b_i erf(b_i) + ( e^(−a_i²) − e^(−b_i²) ) / √π ].   (12)

Again, one can choose a tailcut bound τ for the desired precision ε ≈ erfc(τ/√2) (via the Gaussian CDF) and compute the sums just over the integer range ⌊−τσ⌋ ≤ i ≤ ⌈τσ⌉. A typical choice for IEEE floating point is τ = 10 ("ten sigma").

Choosing p_11 = ∫_0^D f_1(x) dx = S(D) − S(0) has some computational advantages. From p_11 we can derive the other parameters p_01 = p_10 = D − p_11, p_00 = 1 − 2D + p_11, and C_1 = 4(p_11 − D) + 1. To compute an arbitrary C_k, substitute the parameters F′ = kF mod 1 and σ′ = √k · σ (Thm. 1).
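The closed form above is straightforward to implement and cross-check against a Monte Carlo simulation of Definition 1. The sketch below is our own illustration (it is not the paper's reference code); function names are ours:

```python
import math
import random

def S(x, F, D, sigma, tau=10.0):
    # Indefinite integral of f1 = g1 * f_s (Eqn. 12), tailcut at tau sigmas.
    k = int(math.ceil(tau * sigma)) + 1
    c = math.sqrt(2.0) * sigma
    t = 0.0
    for i in range(-k, k + 1):
        a = (x + i - F) / c
        b = (x + i - F - D) / c
        t += (a * math.erf(a) - b * math.erf(b)
              + (math.exp(-a * a) - math.exp(-b * b)) / math.sqrt(math.pi))
    return sigma / math.sqrt(2.0) * t

def C_k(k, F, D, sigma):
    # C_1 = 4 (p11 - D) + 1 with p11 = S(D) - S(0); the Thm. 1 substitution
    # (F' = kF mod 1, sigma' = sqrt(k) sigma) gives the delay-k value.
    Fk, sk = (k * F) % 1.0, math.sqrt(k) * sigma
    p11 = S(D, Fk, D, sk) - S(0.0, Fk, D, sk)
    return 4.0 * (p11 - D) + 1.0

def sampled_C1(F, D, sigma, m=100000, seed=1):
    # Monte Carlo estimate C'_1 (Eqn. 7) from simulated bits (Eqns. 3-4).
    rng = random.Random(seed)
    x, z = rng.random(), []
    for _ in range(m):
        x = (x + rng.gauss(F, sigma)) % 1.0
        z.append(1 if x < D else 0)
    return sum((2 * z[i] - 1) * (2 * z[i + 1] - 1)
               for i in range(m - 1)) / (m - 1)

# Analytic and simulated delay-1 autocorrelations should agree closely.
print(C_k(1, 0.3, 0.5, 0.2), sampled_C1(0.3, 0.5, 0.2))
```

For large σ the analytic C_1 collapses to zero (decorrelated bits), while for F = 0 and tiny σ it approaches one - matching the qualitative behavior described in the text.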
We then have C_k = 4[ S(D) − S(0) − D ] + 1 with the substituted parameters.

Figure 3 shows the density functions for the four bit pair frequencies for an example parameter set (F, D, σ). The dotted line in the upper boxes corresponds to the shape of f_1, and the lower row to f_0; these have been chopped (shaded area) to g_11, g_10, g_01, g_00. Note that even though g_01 has a different shape from g_10, they have equivalent area and hence frequency p_01 = p_10 = (1 − C_1)/4. This is natural, since the frequency of rising edges must match the frequency of falling edges.

Fig. 3. Bit pair probabilities p_11 = 0.204492, p_01 = p_10 = 0.170508, p_00 = 0.454492 for an example (F, D, σ).

D. Use of C_k in Modeling of Physical Sources

The output from bit generation simulations agrees with these explicit autocorrelation values, as expected. Analytic C_k is of course much faster to compute. Since autocorrelation estimates C′_k (Eqn. 7) may also be easily derived from the output of physical ring oscillators, we can find a good approximate (F, D, σ) model for a physical source by matching their autocorrelation properties. We use least-squares minimization of a few initial entries of the autocorrelation vectors for this type of modeling.

We can also experimentally derive parametrized models where frequency F and jitter σ are functions of environmental aspects such as temperature or aging of circuitry; this, in turn, allows us to extrapolate and assign safe bounds for statistical health test parameters and entropy output (yield of the conditioner) over the lifetime of the device.

E. Bit Pattern Probabilities via FFT Convolutions
To compute probabilities of bit triplets and beyond, we may "chop" a density function to zero the part which we know to be conditioned out; g_10(x) = (f_1 · g_0)(x) is f_1 chopped to zero outside the [D, 1) range, so we have ∫_{−∞}^{∞} g_10(x) dx = p_10. Let z be a sequence of bits for which the conditional distribution f_z is known; "chopping" with g_0 or g_1 and convolving with the step function f_s, we obtain the distributions with one additional bit: f_{z,0} = (f_z · g_0) ∗ f_s and f_{z,1} = (f_z · g_1) ∗ f_s. This chop-and-convolute process can be continued to determine the probability and phase distribution of an arbitrary bit pattern.

Direct probability distribution integration formulas such as Eqn. 12 become cumbersome for more generic bit patterns. We instead perform numeric computations on probability density functions f_z represented as real-valued polynomial coefficients. This approach is attractive since the Fast Fourier Transform offers an especially efficient way to compute the cyclic convolutions of polynomials, as is required by our 1-periodic step function f_s (Eqn. 5). Unlike unbounded Gaussians, our random variables x_i ∈ [0, 1) have a strictly limited range.

Algorithm 1
Evaluate bit pattern probability p_z

function PZFFT( F, D, σ, z_1 z_2 ··· z_n )
  for j ← 0, 1, ··· , m − 1 do           ▷ Init: discretized approximation.
    s_j ← (1/m) f_s(j/m)                 ▷ Eqn. 5 for F and σ.
    g_{1,j} ← max(min(mD − j, 1), 0)     ▷ Eqn. 9 for D.
    g_{0,j} ← 1 − g_{1,j}                ▷ Select zero - inverse.
    v_j ← 1/m                            ▷ Start with uniform distribution.
  end for
  for i ← 1, 2, ··· , n do               ▷ For each bit.
    for j ← 0, 1, ··· , m − 1 do         ▷ Chop half.
      t_j ← v_j · g_{z_i, j}             ▷ Note bit select index z_i.
    end for
    v ← t ∗ s mod (x^m − 1)              ▷ Convolution (FFT).
  end for
  return p_z = Σ_{i=0}^{m−1} v_i         ▷ Probability mass.
end function
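Algorithm 1 maps directly onto standard FFT primitives. The following NumPy transcription is our illustrative sketch (the authors' actual implementation is the FFTW3-based C code referenced in the text):

```python
import numpy as np

def f_s(x, F, sigma, tau=10.0):
    # Wrapped ("modular") Gaussian step density, Eqn. 5, tailcut at tau sigmas.
    k = int(np.ceil(tau * sigma)) + 1
    i = np.arange(-k, k + 1)
    e = np.exp(-((x[:, None] - F + i) ** 2) / (2.0 * sigma ** 2))
    return e.sum(axis=1) / (np.sqrt(2.0 * np.pi) * sigma)

def pz_fft(F, D, sigma, z, m=1024):
    # Algorithm 1: bit pattern probability via chop-and-convolute with FFT.
    j = np.arange(m)
    s = f_s(j / m, F, sigma) / m           # s_j: step density, total mass ~ 1
    g1 = np.clip(m * D - j, 0.0, 1.0)      # "chop" window for bit 1 (Eqn. 9)
    g0 = 1.0 - g1                          # ... and for bit 0
    v = np.full(m, 1.0 / m)                # start from the uniform phase
    s_hat = np.fft.rfft(s)                 # transform of f_s, reused per bit
    for bit in z:
        v = v * (g1 if bit else g0)        # chop: condition on the bit value
        v = np.fft.irfft(np.fft.rfft(v) * s_hat, n=m)  # cyclic convolution
    return float(v.sum())                  # remaining probability mass = p_z
```

As a sanity check, a single-bit pattern has probability ≈ D, the two single-bit probabilities sum to one, and at high jitter consecutive bits decorrelate so that p_11 ≈ D².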
These probability density functions f(x) correspond to real-valued degree-(m−1) polynomials v = Σ_{i=0}^{m−1} v_i x^i in Algorithm 1. The unit interval domain x ∈ [0, 1) is mapped to coefficients via v_i ≈ ∫_{i/m}^{(i+1)/m} f(x) dx. For the step function f_s we approximate this as s_i = (1/m) f_s(i/m), and the chop functions g_1, g_0 are sampled so that they retain fractions D and 1 − D of a uniform density, respectively. We write the circular convolution as a polynomial product with reduction modulo x^m − 1, which can be very efficiently computed with the FFT.

The chopping operation is a point-by-point multiplication with g_0 or g_1 in the normal (time) domain, while the step convolution is a point-by-point multiplication with the transform of f_s in the (complex) frequency domain; hence each additional bit z_i requires one forward and one inverse transform, as the transform of f_s remains the same. Our open-source, FFTW3-based [31] portable C implementation allows fast and accurate computation of the probabilities of almost arbitrarily long patterns.

III. ENTROPY EVALUATION
Let Z_n be a random variable representing n-bit sequences z = (z_1, z_2, ..., z_n) which are sequentially generated by the stochastic process of Sect. II, characterized by stationary parameters (F, D, σ). Each of the 2^n possible outcomes z can be assigned a probability p_z = Pr(Z_n = z).

The NIST SP 800-90B [7] entropy source standard focuses on min-entropy H_∞, a member of the Rényi family of entropies [32]. Min-entropy (or "worst-case entropy") has a simple definition in the case of a discrete variable, based on the likelihood of the most likely outcome of Z_n:

  H_∞(Z_n) = min_z ( −log₂ p_z ) = −log₂ ( max_z p_z )   (13)

The AIS 31 [8] Common Criteria evaluation method additionally uses the traditional Shannon entropy metric

  H(Z_n) = −Σ_z p_z log₂ p_z.   (14)

For Shannon entropy we consider its entropy rate H(Z), a [0, 1]-valued limit H(Z) = lim_{n→∞} (1/n) H(Z_n).

Reference source code: https://github.com/mjosaarinen/bitpat

A. Entropy Upper Bounds
Probabilities p_z obtained via Algorithm 1 and the related techniques of Sect. II-E can be substituted into Eqns. 13 and 14 to evaluate H_∞(Z_n) and H(Z_n), respectively. Shannon entropy H(Z_n) provides increasingly accurate upper bounds, since we have H_∞(Z_n) ≤ H(Z_n) and

  H(Z) ≤ ... ≤ (1/3) H(Z_3) ≤ (1/2) H(Z_2) ≤ H(Z_1).   (15)

This relationship follows from the subadditivity of joint entropy in the case of Shannon entropy H; the monotonic relationship of Eqn. 15 does not hold for min-entropy H_∞.

All Rényi entropies are upper bounded by max-entropy (Hartley entropy) H_0, i.e. the logarithm of the number (cardinality) of n-bit z with nonzero probability: H_0(Z_n) = log₂ |{ z : p_z > 0 }|. If an m-bit encoding exists for all elements of Z_n with p_z > 0, then its cardinality is at most 2^m and H_0(Z) ≤ H_0(Z_n) ≤ m. This leads to the limit H(Z) → 0 for a noiseless (σ = 0) source, regardless of the F oscillation. A simple (ε, δ) argument shows that each n-bit sequence z with σ = 0 can be encoded by expressing F, D, and x_0 with an asymptotic O(log n) bits of precision. The claim follows from lim_{n→∞} (1/n) log n = 0.

B. Entropy Lower Bounds as a Function of σ

For an entropy lower bound we consider the entropy contribution of jitter to an individual bit z_i when all of the parameters (F, D, σ) and the previous phase x_{i−1} are known (in addition to the previous bit z_{i−1}). Let p_e = Pr(z_i ≠ z′_i), where the expected bit value z′_i is deterministic (from x_{i−1} + F). We observe that F cancels out in this case, and we have p_e = p_01 + p_10 with F = 0 for the equations of Section II-C. In the case of an unbiased source D = 1/2, a further simplification yields frequency-independent bounds as a function of σ:

  p_e = 1 − 2 [ S(1/2) − S(0) ] = 1 − 4 S(1/2)   (16)

  H(Z) ≥ −p_e log₂ p_e − (1 − p_e) log₂ (1 − p_e)   (17)

  H_∞(Z) ∼ −log₂ (1 − p_e)   (18)

where S is Eqn. 12 with F = 0, D = 1/2. From Eqn.
16 we can show the looser approximate bound p_e ≤ tanh(πσ). These estimates are lower than some previously proposed lower bounds (see Eqn. 2) as they are based on fewer assumptions. Crucially, they cover the entire range of σ - and are therefore safer to use in cryptographic engineering.

C. Min-Entropy Estimates
One part of the min-entropy estimation of H_∞(Z_n) is to find a maximum-likelihood bit sequence z, and the second is to determine its probability p_z. The second part can be accomplished with Algorithm 1 - we have H_∞(Z_n) = −log₂ p_z. A reasonable z string "guess" is to select x_0 at random and use the peak probability path x_i = x_{i−1} + F (mod 1) to determine z_1, z_2, ··· , z_n. This approach is asymptotically sound, but overestimates entropy for small n.

A practical depth-first approach is to proceed as in Alg. 1 but evaluate the weights q_0 = Σ_{j=0}^{m−1} v_j g_{0,j} and q_1 = Σ_{j=0}^{m−1} v_j g_{1,j} at each step i, and select the z_i with the higher probability mass.

Fig. 4. Min-entropy from the distribution of Z_100 with depth-first z selection, NIST SP 800-90B estimates, and our p_e bit-prediction lower bound. Experimental data is represented as a scatter plot, with a line at the average.

While max p_z can usually be found with a subexponential number of z guesses, the worst-case complexity of this problem remains open. Certainly, a simple depth-first search will not always work. Consider max p_z for a source with D = 1/2 and suitably chosen F and σ: the maximal probability can be attained by two patterns (p_0 = p_1) for Z_1, by two patterns for Z_2, but by four equiprobable patterns for Z_3. We first note that the per-bit entropy increase from Z_2 to Z_3 violates subadditivity (and would not be possible for H; Eqn. 15). Furthermore, the maximum-probability bit strings of Z_3 are not substrings of those for Z_2, and are thus not reachable via iteration.

D. Comparison to SP 800-90B Estimation
The current SP 800-90B min-entropy evaluation methodology [7, Section 6.3] used by FIPS 140-3 [30] does not use stochastic models, but proceeds by taking the minimum of ten conservative, standardized entropy tests. The results of this methodology plateau below full entropy even for completely random (cryptographically generated) test data.

We generated 16,000 simulated sequences with random σ and F ∈ [0, 1/2], and subjected them to the official NIST SP 800-90B Entropy Assessment tools. Fig. 4 contrasts these results with the min-entropy estimate for Z_100 where z is chosen to follow the maximum probability mass (Section III-C). As expected, the black-box heuristic, which has been designed to "lean toward a conservative underestimate of min-entropy" [7, Sect. G.2], reports less entropy than our estimates. Fig. 4 also shows the H_∞(Z) ∼ −log₂(1 − p_e) min-entropy derived from the bit-prediction bound of Eqns. 16 and 18. This curve mostly traces the lower reaches of the stochastic model estimates (which are scattered here due to the randomness of F). We suggest that this is a safe min-entropy engineering estimate from the variance σ², assuming an unbiased source (D = 1/2).

NIST: https://github.com/usnistgov/SP800-90B_EntropyAssessment
ACKNOWLEDGMENTS
Thanks to Joshua E. Hill for helpful comments. This work has been supported by Innovate UK Research Grant 105747.

REFERENCES
[2] John A. McNeill and David S. Ricketts. The Designer's Guide to Jitter in Ring Oscillators. Springer, 2009. doi:10.1007/978-0-387-76528-0.
[3] Robert C. Fairfield, Robert L. Mortenson, and Kenneth B. Coulthart. An LSI random number generator (RNG). In
Advances in Cryptology - CRYPTO '84, Santa Barbara, California, USA, August 19-22, 1984, Proceedings, volume 196 of Lecture Notes in Computer Science, pages 203-230. Springer, 1984. doi:10.1007/3-540-39568-7_18.
[7] NIST. Recommendation for the entropy sources used for random bit generation. NIST Special Publication SP 800-90B, January 2018. doi:10.6028/NIST.SP.800-90B.
[9] NIST. Secure hash standard (SHS). Federal Information Processing Standards Publication FIPS 180-4, August 2015. doi:10.6028/NIST.FIPS.180-4.
[10] John Bertrand Johnson. Thermal agitation of electricity in conductors.
Phys. Rev., 32(1):97-109, July 1928. doi:10.1103/PhysRev.32.97.
[11] Harry Nyquist. Thermal agitation of electric charge in conductors. Phys. Rev., 32(1):110-113, July 1928. doi:10.1103/PhysRev.32.110.
[12] Herbert B. Callen and Theodore A. Welton. Irreversibility and generalized noise. Phys. Rev., 83(1):34-40, July 1951. doi:10.1103/PhysRev.83.34.
[13] Ali Hajimiri and Thomas H. Lee. A general theory of phase noise in electrical oscillators. IEEE Journal of Solid-State Circuits, 33(2):179-194, 1998. doi:10.1109/4.658619.
[14] Ali Hajimiri and Thomas H. Lee. The Design of Low Noise Oscillators. Kluwer, 1999. doi:10.1007/b101822.
[15] Ali Hajimiri, Sotirios Limotyrakis, and Thomas H. Lee. Jitter and phase noise in ring oscillators. IEEE Journal of Solid-State Circuits, 34(6):790-804, June 1999. URL: https://authors.library.caltech.edu/4916/1/HAJieeejssc99a.pdf, doi:10.1109/4.766813.
[17] RAND Corporation. A Million Random Digits with 100,000 Normal Deviates.
[18] Maurice G. Kendall and Bernard Babington-Smith. Randomness and random sampling numbers. Journal of the Royal Statistical Society, 101(1):147-166, 1938. doi:10.2307/2980655.
[19] Maurice G. Kendall and Bernard Babington-Smith. Second paper on random sampling numbers. Supplement to the Journal of the Royal Statistical Society, 6(1):51-61, 1939. doi:10.2307/2983623.
[20] NIST. Security requirements for cryptographic modules. Federal Information Processing Standards Publication FIPS 140-2 (with change notices dated October 10, 2001 and December 3, 2002), May 2001. doi:10.6028/NIST.FIPS.140-2.
[21] George Marsaglia. The Marsaglia random number CDROM including the diehard battery of tests of randomness. CDROM and online publication, 1995.
[22] Robert G. Brown, Dirk Eddelbuettel, and David Bauer. Dieharder: A random number test suite. Software distribution, accessed January 2021, 2003. URL: https://webhome.phy.duke.edu/~rgb/General/dieharder.php.
[23] Andrew Rukhin, Juan Soto, James Nechvatal, Miles Smid, Elaine Barker, Stefan Leigh, Mark Levenson, Mark Vangel, David Banks, Alan Heckert, James Dray, and San Vo. A statistical test suite for random and pseudorandom number generators for cryptographic applications. NIST Special Publication SP 800-22 Revision 1a, April 2010. doi:10.6028/NIST.SP.800-22r1a.
[24] Werner Schindler and Wolfgang Killmann. Evaluation criteria for true (physical) random number generators used in cryptographic applications. In Burton S. Kaliski Jr., Çetin Kaya Koç, and Christof Paar, editors,
Cryptographic Hardware and Embedded Systems - CHES 2002, 4th International Workshop, Redwood Shores, CA, USA, August 13-15, 2002, Revised Papers, volume 2523 of Lecture Notes in Computer Science, pages 431-449. Springer, 2002. doi:10.1007/3-540-36400-5_31.
[25] Elaine Barker and John Kelsey. Recommendation for random number generation using deterministic random bit generators. NIST Special Publication SP 800-90A Revision 1, June 2015. doi:10.6028/NIST.SP.800-90Ar1.
[26] Wolfgang Killmann and Werner Schindler. A design for a physical RNG with robust entropy estimators. In Elisabeth Oswald and Pankaj Rohatgi, editors, Cryptographic Hardware and Embedded Systems - CHES 2008, 10th International Workshop, Washington, D.C., USA, August 10-13, 2008. Proceedings, volume 5154 of Lecture Notes in Computer Science, pages 146-163. Springer, 2008. doi:10.1007/978-3-540-85053-3_10.
[27] Mathieu Baudet, David Lubicz, Julien Micolod, and André Tassiaux. On the security of oscillator-based random number generators. J. Cryptology, 24(2):398-425, 2011. doi:10.1007/s00145-010-9089-3.
[28] Yuan Ma, Jingqiang Lin, Tianyu Chen, Changwei Xu, Zongbin Liu, and Jiwu Jing. Entropy evaluation for oscillator-based true random number generators. In Lejla Batina and Matthew Robshaw, editors,
Cryptographic Hardware and Embedded Systems - CHES 2014, 16th International Workshop, Busan, South Korea, September 23-26, 2014. Proceedings, volume 8731 of Lecture Notes in Computer Science, pages 544-561. Springer, 2014. doi:10.1007/978-3-662-44709-3_30.
[29] Oto Petura, Ugo Mureddu, Nathalie Bochard, Viktor Fischer, and Lilian Bossuet. A survey of AIS-20/31 compliant TRNG cores suitable for FPGA devices. In Paolo Ienne, Walid A. Najjar, Jason Helge Anderson, Philip Brisk, and Walter Stechele, editors, FPL 2016, pages 1-10. IEEE, 2016. URL: https://ieeexplore.ieee.org/xpl/conhome/7573873/proceeding, doi:10.1109/FPL.2016.7577379.
[30] NIST and CCCS. Implementation guidance for FIPS 140-3 and the cryptographic module validation program. CMVP, September 2020. URL: https://csrc.nist.gov/CSRC/media/Projects/cryptographic-module-validation-program/documents/fips%20140-3/FIPS%20140-3%20IG.pdf.
[31] Matteo Frigo and Steven G. Johnson. The design and implementation of FFTW3. Proc. IEEE, 93(2):216-231, 2005. Special issue on "Program Generation, Optimization, and Platform Adaptation". doi:10.1109/JPROC.2004.840301.
[32] Alfréd Rényi. On measures of entropy and information. In Jerzy Neyman, editor,