An Investigation on Inherent Robustness of Posit Data Representation
Ihsen Alouani, Anouar Ben Khalifa, Farhad Merchant, Rainer Leupers
TO APPEAR IN VLSID 2021
Ihsen Alouani∗, Anouar Ben Khalifa†, Farhad Merchant‡, Rainer Leupers‡
∗IEMN Lab CNRS UMR 8520, INSA Hauts-De-France
†National Engineering School of Sousse, University of Sousse, Tunisia
‡Institute for Communication Technologies and Embedded Systems, RWTH Aachen University
[email protected], [email protected], {farhad.merchant, leupers}@ice.rwth-aachen.de

Abstract—As the dimensions and operating voltages of computer electronics shrink to cope with consumers' demand for higher performance and lower power consumption, circuit sensitivity to soft errors increases dramatically. Recently, a new data type called the posit data type has been proposed in the literature. Posit arithmetic has absolute advantages such as higher numerical accuracy, speed, and simpler hardware design than IEEE 754-2008 technical standard-compliant arithmetic. In this paper, we propose a comparative robustness study between 32-bit posit and 32-bit IEEE 754-2008 compliant representations. At first, we propose a theoretical analysis for IEEE 754 compliant numbers and posit numbers under single bit flips and double bit flips. Then, we conduct exhaustive fault injection experiments that show a considerable inherent resilience in the posit format compared to the classical IEEE 754 compliant representation. To show a relevant use-case of fault-tolerant applications, we perform experiments on a set of machine-learning applications. In more than 95% of the exhaustive fault injection exploration, the posit representation is less impacted by faults than the IEEE 754 compliant floating-point representation. Moreover, in the tested machine-learning applications, the accuracy of posit-implemented systems is higher than that of the classical floating-point-based ones.
Index Terms—Computer Arithmetic, Posit Arithmetic, Machine Learning, Reliability
I. INTRODUCTION
As sub-micron technology dimensions sharply decrease to a few nanometer ranges in commercialized integrated circuits, the sensitivity of electronic circuits increases drastically [1]. Hence, embedded microprocessors are becoming more vulnerable to soft errors, and designing dependable systems is a challenging task for chip designers. In fact, these systems have to operate reliably even in the presence of faults, to sustain the present growth rate of device count and clock frequency with continuously growing reliability issues. Moreover, the sensitivity of chips is also intensified by voltage scaling [2]. Undesirable and accidental faults become more frequent in new generation computing systems where the systems are running heavy-duty numerical computations. A single-event upset (SEU) or multi-bit upset (MBU) could bring a catastrophic outcome in mission-critical applications such as space and missile-navigation applications. Hence, having reliable arithmetic that is resilient to errors is a primary requirement for mission-critical computing systems.
Posit is a new data type that is capable of storing more information-per-bit compared to its IEEE 754 compliant counterparts [3]. For example, a 32-bit posit number can have a similar dynamic range and better accuracy at the same time compared to a 32-bit IEEE 754 compliant number. In general, an m-bit posit has a higher dynamic range and better numerical accuracy properties compared to an n-bit IEEE 754 compliant number where m = n. It is shown in the literature that, for computing systems, n-bit IEEE 754 compliant numbers can be replaced by m-bit posit numbers where m < n, since the posit number system exhibits a trade-off between accuracy and dynamic range [4] [5]. These trade-offs allow the selection of the desired posit format that is suitable for computing systems without compromising accuracy and performance [6] [7]. Further details of the number system and the formats are discussed in Section II-A. Reliability aspects of posit arithmetic are yet to be explored by the research community.

To the best of our knowledge, this is the first comparative study on the inherent fault tolerance of posit arithmetic vis-à-vis its IEEE counterpart. We carry out an extensive investigation of reliability through an exhaustive fault injection scheme. The major contributions of the paper are as follows:
• We propose a theoretical analysis and an exhaustive reliability exploration of posit arithmetic vis-à-vis its IEEE 754 compliant counterparts.
• We conduct exhaustive reliability exploration as well as machine-learning (ML) benchmarks under fault injection.
• We show promising results in posit arithmetic that may encourage its utilization in safety-critical applications, as well as approximate computing.
For reproducibility, we make our framework open source [8]. The rest of the paper is organized as follows: In Section II we present a background on posit arithmetic and soft errors, followed by related work in Section III.
Section IV describes the analysis and the proposed methodology for error resilience using posit arithmetic. In Section V, the experimental setup and results are discussed. We summarize our work in Section VI.

II. BACKGROUND AND RELATED WORKS
A. IEEE 754 Compliant and Posit Number Systems
The IEEE 754-2008 compliant floating-point format binary numbers are composed of three parts: a sign, an exponent and a fraction part (see Fig. 1). The sign is the most significant bit, indicating whether the number is positive or negative. In a single-precision format, the following 8 bits represent the exponent of the binary number, ranging from −126 to 127. The remaining 23 bits represent the fractional part. The normalized format of floating-point numbers is:

$val = (-1)^{sign} \times 2^{exp-bias} \times (1.fraction)$   (1)

Fig. 1. Description of the IEEE 754 single-precision floating-point and posit formats.

Posit arithmetic was proposed as a drop-in replacement for IEEE 754 compliant arithmetic in 2017 [3]. The posit number format has several absolute advantages over IEEE 754 compliant arithmetic, such as higher accuracy, higher dynamic range, simpler hardware implementation for arithmetic operations, and lower area and energy footprints [9]. Besides, it is shown in the literature that m-bit posit adders/multipliers can safely replace n-bit IEEE 754 compliant adders/multipliers where m < n [4]. Hence, posit representation confirms more information-per-bit compared to its IEEE 754 counterpart representation. Furthermore, with posit representation there are no redundant representations, and overflow/underflow in the computations is nonexistent with posit arithmetic. Subnormal numbers are handled in a normal way with posit representation, unlike IEEE 754 representation, and there are only two exception cases: zero and not-a-real (NaR). For all other cases, the value val of a posit is given by

$val = (-1)^{sign} \times useed^{k} \times 2^{exp} \times \left(1 + \sum_{i=1}^{fn} b_{fn-i}\, 2^{-i}\right)$   (2)

The regime indicates a scale factor of $useed^{k}$, where $useed = 2^{2^{es}}$ and es is the exponent size. The numerical value of k is determined by the run length of 0 or 1 bits in the string of regime bits.
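To make the two encodings concrete, the following Python sketch decodes a raw 32-bit word under both Eq. (1) and Eq. (2), with the bit-slicing following Fig. 1. This is an illustrative re-implementation, not the paper's toolflow; the truncated-exponent handling (missing bits read as 0) is the standard posit convention.

```python
import struct

def decode_float32(word: int) -> float:
    """Decode per Eq. (1): (-1)^sign * 2^(exp - 127) * 1.fraction."""
    sign = word >> 31
    exp = (word >> 23) & 0xFF
    frac = word & 0x7FFFFF
    if exp == 0xFF:                               # IEEE 754 specials
        return float('nan') if frac else (-1.0) ** sign * float('inf')
    if exp == 0:                                  # subnormals
        return (-1.0) ** sign * 2.0 ** -126 * (frac / 2.0 ** 23)
    return (-1.0) ** sign * 2.0 ** (exp - 127) * (1 + frac / 2.0 ** 23)

def decode_posit32(word: int, es: int = 2) -> float:
    """Decode per Eq. (2): (-1)^sign * useed^k * 2^exp * (1 + fraction)."""
    if word == 0:
        return 0.0
    if word == 0x8000_0000:
        return float('nan')                       # NaR
    sign = word >> 31
    if sign:                                      # negative posits: 2's complement
        word = (-word) & 0xFFFF_FFFF
    body = word & 0x7FFF_FFFF                     # the 31 bits after the sign
    first = (body >> 30) & 1                      # leading regime bit
    i, run = 30, 0
    while i >= 0 and ((body >> i) & 1) == first:  # run length of the regime
        run += 1
        i -= 1
    k = run - 1 if first else -run
    i -= 1                                        # skip the regime terminator
    exp = 0
    for _ in range(es):                           # up to es exponent bits;
        exp <<= 1                                 # truncated bits read as 0
        if i >= 0:
            exp |= (body >> i) & 1
            i -= 1
    fn = i + 1                                    # remaining fraction bits
    frac = body & ((1 << fn) - 1) if fn > 0 else 0
    useed = 2 ** (2 ** es)
    val = useed ** k * 2.0 ** exp * (1 + (frac / 2.0 ** fn if fn > 0 else 0.0))
    return -val if sign else val
```

Note how the same raw word takes different values under the two formats: 0x40000000 decodes to 2.0 as a float (biased exponent 128) but to 1.0 as a posit (k = 0, exp = 0). This is exactly why the fault injection study below interprets each corrupted word under both representations.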
The use of run-length encoding of the regime automatically allows more fraction bits for the more common values whose magnitudes are closer to 1, and thus provides tapered accuracy in a bit-efficient way. Further details about the posit number format and posit arithmetic can be found in [3]. The posit format and the IEEE 754-2008 compliant number format are depicted in Fig. 1. In our experiments, we have used IEEE 754 compliant 32-bit (single-precision) floating-point numbers and 32-bit posit numbers with es = 2, which are commonly used.

B. Soft Errors
The sharp technology scaling in new generation integrated circuits accentuates the sensitivity of electronic circuits. As a matter of fact, embedded systems are becoming remarkably sensitive to soft errors. These errors result from a voltage transient event induced by alpha particles from packaging material or neutron particles from cosmic rays [10]. This event is created due to the collection of charge at a p-n junction after a track of electron-hole pairs is generated. In past technologies, this issue was considered in a limited range of applications in which the circuits operate under aggressive environmental conditions, like aerospace applications. Nevertheless, shrinking transistor sizes and reduced supply voltages in new hardware platforms bring soft errors to ground-level mainstream applications [11] [12].

III. RELATED WORK
Since soft errors became a challenging threat to reliability, numerous published works propose error-resilient memories. Architecture-level error resilience techniques such as single error correction double error detection (SECDED) have been proposed and widely used for memory protection [13]. The main drawback of SECDED is its area overhead and the supplementary latency leading to performance loss. A fault-tolerant architecture presented in [14] combines both parity and single redundancy to enhance memories' reliability. The weakness of these techniques is their area, power and delay overheads due to the additional memory cells and supporting circuits required for error detection and correction.

Circuit-level techniques have been proposed to overcome architecture-level overheads. These techniques enhance error resilience at the circuit level either by slowing down the response of the circuit to transient events or by increasing its critical charge. Methods such as [15] suggest hardening the cell using a pass transistor that is controlled by a refreshing signal. Hardened memory cells were proposed in [16], [17] and [18] that add redundant transistors to the 6T-SRAM to increase the cell critical charge. A Schmitt trigger-based technique [19] proposes a hardened 13-T memory cell. However, this technique slows down the memory due to a Schmitt trigger's hysteresis temporal characteristics. In the context of emerging approximate computing applications, recent works like [20] proposed a trade-off between reliability and computing precision. To assess the reliability level at an early stage, fault injection can be performed in simulation. All these techniques do not take into account the actual data representation that is stored within the protected memories, especially numerical values.

A number of researchers have approached the reliability issue in numerical algorithms.
The vast majority of them treat an algorithm as a black box and track the behavior of these applications when running with injected soft errors. In [21], a study on soft error propagation in floating-point programs is presented. In [22], the behavior of various Krylov methods is analysed. The authors track the variance in iteration count based on the data structure that experiences the bit flip. The authors in [23] analyzed the impact of bit flips in a sparse matrix-vector multiply (SpMV). Exemplifying the concept of black-box analysis of bit flips, [24] presents BIFIT for characterizing applications based on their vulnerability to bit flips.

While these techniques study the reliability of applications based on floating-point formats, none of them study the inherent sensitivity level of floating-point representations. This paper proposes a comparative study of the inherent sensitivity to errors in IEEE 754 compliant floating-point and posit representations.

Fig. 2. Fraction bits in an IEEE 754-2008 compliant number and a posit compliant number.

IV. PROPOSED METHODOLOGY
We cover analyses for SEU and MBU considering different aspects. Since float and posit have different data formats, as shown in Fig. 1, a single or multiple bit flip event in a 32-bit number results in a new, different number for both formats. In our analyses, we consider bit flips in the fraction and exponent for both formats, as well as regime bits for posit numbers. For our theoretical analyses, we use numbers $f, p \in \mathbb{R}$, where $f$ is compliant to IEEE 754-2008 and $p$ is a posit number, and $0 \le s_1, s_2 \le 31$, $s_1, s_2 \in \mathbb{N}$. $s_1, s_2$ are the positions of the bit flips in $f$ and $p$, $s_1 \ne s_2$. For both number formats, we assume that the SEU and MBU occur at the same position.

A. Fraction bits
The total numbers of fraction bits are 23 in an IEEE 754-2008 compliant number and 23 + m in a posit compliant number, respectively; m bits are appended to the fraction part of a posit compliant number to the left of the fraction bits. Let $b_{22} b_{21} \dots b_0$ and $a_{22+m} a_{21+m} \dots a_0$ be two binary numbers that represent the fraction parts of an IEEE 754 compliant number and a posit compliant number, respectively. A representative diagram to understand the bit flip phenomena in the fraction part is shown in Fig. 2.

The largest error that can occur in an IEEE 754-2008 compliant number due to a bit flip in the fraction part is the flip of $b_{22}$ ($s_1 = 23$). A flip from 0 to 1 or vice versa would result in the subtraction or addition of 0.5 in the decimal value of the fraction part. On the other hand, in posit, to have a similar impact, $s_2$ has to be at the 23 + m bit position. In general, if a bit flip in the fraction part of an IEEE 754-2008 compliant number is at position $s_1$, then a similar impact in the fraction part of a posit compliant number can be observed if there is a bit flip at position $s_1 + m$. The value of m depends on the configuration of the posit number. For example, a 32-bit posit number can have a varying number of regime bits, since regime bits are determined by the run-length of '0' or '1' from the most significant bits after the sign bit (refer to Equation 2). In practical scenarios, the run-length of '0' or '1' is not expected to be very large, since a large k results in a very high dynamic range for the numbers. The k = 5 and es = 2 configuration results in a dynamic range that is similar to the IEEE 754 compliant number. In general, $m = exp_{size} - k - es$, where $exp_{size}$ is the exponent size in the IEEE 754 compliant number and es is the posit exponent size. Since, in most realistic scenarios, $exp_{size} > (k + es)$, a bit flip at position $s_1$ in an IEEE 754 compliant number and in a posit number results in a smaller error in the posit number.

In the case of a double bit flip, with the second bit flip position being $s_2$, and assuming that the second bit flip occurs in the same location in an IEEE 754 compliant number and a posit number, the error due to the second bit flip is higher in the IEEE 754 compliant number. The higher error is due to the higher weight associated with the bit position in the IEEE 754 compliant number compared to the posit number.

Fig. 3. Toolflow of the fault injection process in both posit (32,2) configuration and IEEE 754 compliant single-precision floating-point formats (open-source, available in [8]).

B. Exponent and regime bits
A single bit flip in the exponent injects a higher error into an IEEE 754 compliant number than into a posit number, since the phenomenon explained in Section IV-A applies to the exponent bits as well. Due to the higher weight associated with a position in the exponent part of the IEEE 754 compliant number compared to the exponent part of the posit number, the error incurred is higher in the IEEE 754 compliant number. A bit flip in the regime part of a posit incurs a higher error compared to a bit flip in the corresponding bits of the float, due to the higher weight associated with the posit number. Similarly, a second bit flip in the exponent incurs a lower error in posit compared to an IEEE 754 compliant number, while a second bit flip in the regime section of a posit number results in a higher error. In the subsequent section, we present the toolflow to validate our claims.

C. Toolflow
To assess the impact of errors on both posit and IEEE 754-2008 compliant representations, we proceed to an exhaustive fault injection exploration process. We modified the public posit implementation [25] to support our fault injection mechanism. Besides, we built an exhaustive exploration platform, shown in Fig. 3. The idea is to focus on the actual arithmetic representation of the data instead of a coarse-grain probabilistic study or a very fine-grain circuit simulation.

Fig. 3 explains the methodology followed to assess the inherent reliability of the two tested representations. Since we are considering reliability from a hardware perspective, we stick to the actual bit-level data representation. In this paper, we focus on IEEE 754 single-precision floating-point and $(N, es) = (32, 2)$ posit representations (where N is the width of the representation and es is the exponent size) for our experiments. Hence, from a raw 32-bit word, we generate the corresponding floating-point and posit numbers. For a fair comparison, the errors are injected exactly in the same respective bit of the two tested representations. For double bit upsets as well, we choose the same locations for bit flips in both representations. The inherent reliability of the two representations is assessed by quantifying the mean relative error distance (MRED) of a corrupted value from a golden (non-corrupted) value, as shown in Equation 3.

Fig. 4. (a) Mean relative error distance comparison between posit and IEEE 754 compliant float under single event upset injection. (b) Error distance comparison between posit and IEEE 754 compliant float under double event upset injection.

$MRED = \frac{1}{32} \sum_{i=0}^{31} \frac{|V_i - V_i^*|}{V_i}$   (3)

where $V_i$ and $V_i^*$ are the golden and the corrupted values, respectively, when a fault is injected in bit $i$. MRED gives an insight into the mean impact of bit flips that are injected exhaustively in all 32 bits of a word.

V. EXPERIMENTAL SETUP AND RESULTS
The experiments are divided into two categories:
• The first is a comparative application-agnostic exploration of the fault injection impact on the reliability of posit and IEEE 754 compliant representations.
• The second is a comparative reliability study on a set of ML systems for two different applications tested under fault injection.
This section details the experimental setup and discusses the results.
A. Exhaustive comparative reliability exploration
This set of experiments follows the methodology presented in Section IV. The toolflow is implemented in C using the SoftPosit platform [25] for posit and a purpose-built bit-wise fault injection platform for IEEE 754 compliant numbers. The experiments are run on a 3 GHz Intel Core i7 processor running the OS X 10.9.5 operating system.
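The injection principle of the toolflow, flipping a chosen bit of the raw 32-bit word and then reinterpreting the corrupted word in each format, can be sketched in a few lines. This is an illustrative Python sketch, not the C/SoftPosit implementation used in the experiments.

```python
import struct

def flip_bit_in_word(word: int, pos: int) -> int:
    """Inject an SEU by flipping bit `pos` (0 = LSB) of a raw 32-bit word."""
    return word ^ (1 << pos)

def word_to_float32(word: int) -> float:
    """Interpret the raw word as an IEEE 754 single-precision number."""
    (x,) = struct.unpack('>f', word.to_bytes(4, 'big'))
    return x

# Example: flipping the sign bit (bit 31) of the pattern for 1.0 yields -1.0.
word = 0x3F800000                      # IEEE 754 encoding of 1.0
corrupted = flip_bit_in_word(word, 31)
assert word_to_float32(corrupted) == -1.0
```

The same corrupted word would also be handed to the posit decoder, so that both representations experience the fault in exactly the same bit position.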
1) Single Event Upset:
The above-explained experimental setup aims at exploring the impact of bit flips on a given numerical data representation in an exhaustive manner. The results shown in Fig. 4(a) expose, on a logarithmic scale, the comparison between posit and IEEE 754 compliant floating-point representations' inherent resilience to bit flips. The comparison is performed based on the MRED between the golden value (without fault injection) and the corrupted one for both posit and IEEE 754 floats. The results shown in Fig. 4(a) represent a geometric superposition where the IEEE 754 compliant floating-point graph is, in most cases, above the posit graph. This indicates that the posit representation is globally more error resilient than the IEEE 754 compliant representation. In fact, in more than 95% of the explored cases, a bit flip in an IEEE 754 compliant number deviates from the golden data more than in the posit number. Moreover, we registered only 31 cases of not-a-real (NaR) with posit, which represents 0.7E-6% of the fault injections. On the other hand, for IEEE compliant floating point, more than 4% of the fault injections resulted in not-a-number (NaN). These cases correspond to non-representable data in the IEEE 754 compliant floating-point graph of Fig. 4(a).
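The per-word metric of Eq. (3) can be sketched for the float side as follows. This is an illustrative Python version, with one simplifying assumption made explicit: non-finite outcomes (the NaN cases discussed above) are skipped rather than plotted separately.

```python
import math
import struct

def mred_float32(x: float) -> float:
    """Eq. (3): mean relative error distance of x over exhaustive
    single-bit flips of its 32-bit IEEE 754 encoding. Non-finite
    outcomes (NaN/infinity) are skipped in this sketch."""
    (bits,) = struct.unpack('>I', struct.pack('>f', x))
    total = 0.0
    for i in range(32):                  # flip each of the 32 bits in turn
        (v_star,) = struct.unpack('>f', struct.pack('>I', bits ^ (1 << i)))
        if math.isfinite(v_star):
            total += abs(x - v_star) / abs(x)
    return total / 32

# Exponent-bit flips dominate the mean, as the theoretical analysis predicts:
# for 3.0, a single high exponent-bit flip scales the value by roughly 2^64.
print(mred_float32(3.0))
```

Running the same loop with a posit decoder in place of the float decoder yields the posit curve of Fig. 4(a).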
2) Double Event Upset:
Starting from the 40 nm technology node, more than 35% of bit upsets are MBUs. Therefore, it is important to consider this phenomenon in reliability assessment processes. In this section, we track the impact of double bit upsets on the data representation for both posit and IEEE 754 compliant floating-point representations.

Following the same fault injection exploration mechanism as shown in Fig. 3, we evaluate the impact of two-bit flips on the MRED for both posit and IEEE-compliant floating point. We inject two bit flips at every iteration: the first is injected exhaustively bit-wise, and the second location is randomly selected following a normal distribution among the remaining bits. Fig. 4(b) shows the MRED caused by two-bit upsets in both representations. Injecting two bit flips results globally in higher error magnitudes. However, the results still confirm the higher inherent error resilience of posit shown with the single bit upset experiments.

The error resilience in posit is due to two main reasons: the variable size of the scale factor (regime bits) and a larger number of bits in the fractional part for the vast majority of cases. The larger number of bits in the fractional part is due to the variable-sized regime bits. An SEU or MBU in the fractional part results in a lower error compared to an SEU or MBU in the regime bits or exponent bits of the posit. Since the IEEE 754 compliant representation has more exponent bits compared to the regime bits and the exponent bits in posits, the resulting error is higher in IEEE 754 compliant floating-point numbers compared to posits. The better error resilience of the posit data type makes it the right choice for mission-critical next-generation systems. Moreover, the absence of redundant representations such as NaNs is a supplementary factor that enhances posit robustness to errors.

Fig. 5. Human action recognition rates using statistical features.
Fig. 6. Human action recognition rates using wavelet features.
Fig. 7. Biometric ECG authentication rates using statistical features.
Fig. 8. Biometric ECG authentication rates using wavelet features.
(Figs. 5-8 each report, for the classifiers listed in Section V-B, the accuracy without error, the accuracy under posit error, and the accuracy under floating-point error.)
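The double-upset injection described above (first flip exhaustive, second flip at a random remaining position) can be sketched as follows for the float side. For simplicity, this sketch draws the second position uniformly rather than from the normal distribution used in the experiments.

```python
import random
import struct

def inject_double_flip(x: float, pos1: int, rng: random.Random) -> float:
    """MBU model: flip bit pos1 of the float32 encoding of x, plus one
    additional, distinct bit chosen at random among the remaining 31."""
    pos2 = rng.choice([p for p in range(32) if p != pos1])
    (bits,) = struct.unpack('>I', struct.pack('>f', x))
    (y,) = struct.unpack('>f', struct.pack('>I', bits ^ (1 << pos1) ^ (1 << pos2)))
    return y

rng = random.Random(42)                      # fixed seed for repeatability
corrupted = inject_double_flip(3.0, 5, rng)
# Exactly two bits differ between the golden and corrupted encodings.
(b1,) = struct.unpack('>I', struct.pack('>f', 3.0))
(b2,) = struct.unpack('>I', struct.pack('>f', corrupted))
assert bin(b1 ^ b2).count('1') == 2
```

Sweeping pos1 over all 32 positions and averaging, as in Eq. (3), reproduces the exploration behind Fig. 4(b).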
B. Machine-learning applications
Recent attacks on ML applications are based on deliberate fault-injection techniques [26]. In this subsection, we show the results of fault injection experiments applied to a set of ML applications. We evaluate two computer-vision systems. The first is a biometric authentication system using electrocardiogram (ECG) signals based on the LATIS ECG Database [27]. The second is a human action recognition (HAR) system using kinematic accelerometer signals trained with the Berkeley MHAD dataset [28]. For the feature extraction phase, two types of features were chosen:
• Temporal features such as the mean, the standard deviation, the quadratic mean, and the covariance.
• Time-frequency characteristics resulting from the wavelet transformation.
We used the sliding window method to extract the characteristics of each window. These characteristics are subsequently concatenated in a descriptor vector.

For the classification phase, we evaluate a set of the most widely used classifiers in the literature. These ML techniques are: Support Vector Machines (SVM) with linear, Gaussian and cubic kernels, Decision Trees, Discriminant analysis classifiers, Naive Bayes classifiers, KNN classifiers (K = 1), AdaBoost classifier, Random forest and Neural network classifiers. Figures 5, 6, 7 and 8 show the recognition rates of the different techniques and settings with and without fault injection. These figures show the impact of single fault injection in the input of the different classifiers for both IEEE floating-point and posit data representations. In all these cases, with varying features and classifiers, the fault injection impact is significantly lower on the posit implementation, which confirms the findings in Section V-A. In fact, the overall accuracy drop under fault injection is considerably smaller for posit than for the IEEE-compliant floating-point implementations.

VI. CONCLUSION
This paper investigates the reliability of two prominent data representations, namely the IEEE 754 compliant single-precision and the (32,2) posit representation. Firstly, we presented a brief theoretical analysis of both number formats for a single bit flip and a double bit flip. An exhaustive fault injection platform was implemented, and the exploration led to a promising conclusion for posit arithmetic that corroborated our theoretical analysis. To further illustrate this finding, we conducted a benchmark of several ML techniques under fault injection. The experiments demonstrate the higher inherent robustness of posit compared to the classical IEEE 754 representation. These findings are useful for safety-critical systems design. They can also be exploited for limiting imprecision in approximate computing designs. Future work will tackle the implementation of a full posit-based processor architecture.
REFERENCES
[1] I. C. et al., "Impact of technology scaling on SRAM soft error rates," IEEE Trans. Nucl. Sci., vol. 61, no. 6, pp. 3512–3518, Dec. 2014.
[2] B. P. Sanches et al., "J-SWFIT: A Java software fault injection tool," April 2011, pp. 106–115.
[3] Gustafson et al., "Beating floating point at its own game: Posit arithmetic," Supercomput. Front. Innov.: Int. J., vol. 4, no. 2, pp. 71–86, Jun. 2017. [Online]. Available: https://doi.org/10.14529/jsfi170206
[4] R. Chaurasiya et al., "Parameterized posit arithmetic hardware generator," in ICCD 2018, Oct. 2018, pp. 334–341.
[5] S. Nambi et al., "ExPAN(N)D: Exploring posits for efficient artificial neural network design in FPGA-based systems," arXiv, 2020.
[6] R. Jain et al., "CLARINET: A RISC-V based framework for posit arithmetic empiricism," arXiv.org, 2020.
[7] V. Saxena et al., "Brightening the optical flow through posit arithmetic," in International Symposium on Quality Electronic Design (ISQED), Apr. 2021.
[8] GitHub repository for the posit and IEEE 754 compliant fault injection platform. [Online]. Available: https://github.com/ihstein/posit_FP_reliability
[9] A. Guntoro et al., "Next generation arithmetic for edge computing," 2020, pp. 1357–1365.
[10] J. Ziegler et al., "IBM experiments in soft fails in computer electronics," IBM Journal of Research and Development, vol. 40, no. 1, 1996.
[11] H. Quinn et al., "Terrestrial-based radiation upsets: a cautionary tale," in FCCM 2005, April 2005, pp. 193–202.
[12] G. Just et al., "Soft errors induced by natural radiation at ground level in floating gate flash memories," in IRPS 2013, April 2013, pp. 3D.4.1–3D.4.8.
[13] P. Reviriego et al., "Error detection in majority logic decoding of Euclidean geometry low density parity check (EG-LDPC) codes," IEEE Trans. VLSI Syst., vol. 21, no. 1, Jan. 2013.
[14] I. Alouani et al., "Parity-based mono-copy cache for low power consumption and high reliability," RSP 2012, Oct. 2012.
[15] B. S. Gill et al., "A new asymmetric SRAM cell to reduce soft errors and leakage power in FPGA," in DATE 2007, Apr. 2007.
[16] X. Liu et al., "A novel soft error immunity SRAM cell," in IRW 2013, Oct. 2013.
[17] J. Guo et al., "Novel low-power and highly reliable radiation hardened memory cell for 65 nm CMOS technology," IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 61, no. 7, pp. 1994–2001, July 2014.
[18] I. Alouani et al., "AS8-static random access memory (SRAM): asymmetric SRAM architecture for soft error hardening enhancement," IET Circuits, Devices & Systems, vol. 11, no. 1, pp. 89–94, 2017.
[19] S. Lin et al., "Analysis and design of nanoscale CMOS storage elements for single-event hardening with multiple-node upset," IEEE Transactions on Device and Materials Reliability, vol. 12, no. 1, pp. 68–77, March 2012.
[20] D. Shin et al., "Approximate logic synthesis for error tolerant applications," in DATE 2010, March 2010, pp. 957–960.
[21] S. Li et al., "Soft error propagation in floating-point programs," in International Performance Computing and Communications Conference, Dec. 2010, pp. 239–246.
[22] V. Howle et al., "The effects of soft errors on Krylov methods," SIAM Parallel Processing, Feb. 2017.
[23] M. Shantharam et al., "Characterizing the impact of soft errors on iterative methods in scientific computing," in ICS 2011. New York, NY, USA: ACM, 2011, pp. 152–161. [Online]. Available: http://doi.acm.org/10.1145/1995896.1995922
[24] D. Li et al., "Classifying soft error vulnerabilities in extreme-scale scientific applications using a binary instrumentation tool," in SC '12: Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis, Nov. 2012, pp. 1–11.
[25] C. Leong. (2018, Nov.) SoftPosit version 0.4.1rc. [Online]. Available: https://gitlab.com/cerlane/SoftPosit
[26] V. Venceslai et al., "NeuroAttack: Undermining spiking neural networks security through externally triggered bit-flips," 2020.
[27] T. Hamdi et al., "A novel feature extraction method in ECG biometrics," in International Image Processing, Applications and Systems Conference, Nov. 2014, pp. 1–5.
[28] F. Ofli et al., "Berkeley MHAD: A comprehensive multimodal human action database," in