Interactive Verifiable Polynomial Evaluation
arXiv preprint [cs.CC]
Saeid Sahraei∗, Mohammad Ali Maddah-Ali†, Salman Avestimehr‡

Abstract
Cloud computing platforms have created the possibility for computationally limited users to delegate demanding tasks to strong but untrusted servers. Verifiable computing algorithms help build trust in such interactions by enabling the server to provide a proof of correctness of his results which the user can check very efficiently. In this paper, we present a doubly-efficient interactive algorithm for verifiable polynomial evaluation. Unlike the mainstream literature on verifiable computing, the soundness of our algorithm is information-theoretic and cannot be broken by a computationally unbounded server. By relying on basic properties of error correcting codes, our algorithm forces a dishonest server to provide false results to problems which become progressively easier to verify. After roughly log d rounds, the user can verify the response of the server against a look-up table that has been pre-computed during an initialization phase. For a polynomial of degree d, we achieve a user complexity of O(d^ε), a server complexity of O(d^{1+ε}), a round complexity of O(log d) and an initialization complexity of O(d^{1+ε}).

1 Introduction

The recent rise in cloud computing platforms has created an increasing demand for verifiable computing protocols. A commercial server that sells its computation power to the users is incentivized to return false results if, by doing so, it can reduce its computation overhead and oversell its services. Therefore, a user who delegates a computationally demanding task to a server must be able to verify that the results are indeed correct. Verifiable computing algorithms require the server to send a “proof” to the user along with the results of computation [14]. By investigating this proof, a user is convinced with high probability that the server is honest. Clearly, a practical verifiable computing algorithm must be “doubly-efficient” [16]: it must have a super-efficient verifier (user) and an efficient prover (server).
Put differently, if the complexity of computing the original function is χ, the prover's complexity must be comparable to χ, and the verifier's complexity must be substantially smaller than χ.

The study of verifiable computing has led to novel cryptographic algorithms which are either applicable to arbitrary functions [15, 25] or tailored to the computation of a specific function [13, 9]. The former has led to the development of Quadratic Arithmetic Programs and zkSNARKs for arithmetic circuits, while the latter has resulted in highly efficient and easy-to-implement algorithms that can be applied to, say, polynomial evaluation or matrix multiplication. Despite their enormous success, the security of cryptographic algorithms depends on hardness assumptions about certain mathematical problems. Algorithmic breakthroughs or technological advancements may falsify these assumptions at any time. This creates a demand for verifiable computing algorithms which remain secure in the face of computationally unbounded provers.

Our objective in this work is to design an information-theoretic algorithm for verifiable polynomial evaluation. Information-theoretic in this context means that the soundness of the algorithm is fundamental, and does not rely on hardness assumptions. An example of how such an algorithm can be applied in practice is blockchain networks, where the capacity to verify a newly mined block against the history of the transactions is the distinguishing factor between a light node and a full node. This block verification process can generally be modelled as an instance of polynomial evaluation [20].

∗University of Southern California, Los Angeles, CA 90089, USA. Email: [email protected]
†Nokia Bell Labs, Holmdel, NJ 07733, USA. Email: [email protected]
‡University of Southern California, Los Angeles, CA 90089, USA. Email: [email protected]
The full nodes can thus convince the light nodes of the correctness of a block via a verifiable polynomial evaluation algorithm.

Our setup is very similar to the classical notion of interactive proof systems, with one subtle difference: we allow for a one-time initialization or pre-processing phase during which the verifier may perform a computationally heavy task. This is acceptable because in the verifiable computing literature it is generally assumed [14, 9, 13] that after this initialization phase, the verifier and the prover will engage in evaluating the function at many inputs. Therefore, this initialization cost amortizes over many rounds and can be neglected.
1.1 Problem Formulation

Consider a polynomial of degree d − 1,

f(x) = a_0 + a_1 x + ··· + a_{d−1} x^{d−1}.

A verifier wishes to evaluate this polynomial at x ∈ {x_1, x_2, ···} with the help of a prover. Similar to [16], we require the algorithm to be doubly-efficient with a small round complexity. On the other hand, we deviate from the classical notion of proof systems and allow the verifier to perform a one-time computationally heavy task. We impose the following performance criteria on the algorithm.

• Efficient initialization: the verifier is allowed to perform a one-time initialization phase and store the outcome for future reference. Although this phase is run only once, its complexity should be comparable to the complexity of computing f(x).
• Super-efficient verifier: for each x ∈ {x_1, x_2, ···}, the complexity of the verifier should be negligible compared to the complexity of computing f(x).
• Efficient prover: for each x ∈ {x_1, x_2, ···}, the complexity of the prover should be comparable to the complexity of computing f(x).
• Small round complexity: the number of rounds of interaction between the prover and the verifier must be polylogarithmic in d.
• Completeness: if the prover is honest, the verifier should accept his results with probability 1.
• (Information-theoretic) soundness: if the prover is dishonest, the verifier should be able to reject his results with probability at least 1/2, even if the prover is computationally unbounded.

1.2 Related Work

An interactive proof (IP) system [17, 4] is an interactive protocol that enables a prover to convince a computationally-limited verifier of the correctness of a statement. It is well-known that IPs are very strong tools: any problem in PSPACE admits an interactive proof with polynomial complexity for the verifier [23, 31]. Nevertheless, the recent rise in cloud computing platforms has led to several intriguing questions surrounding the practicality of such algorithms. In particular, the provers in [23, 31] run in exponential time, making them unfit for real-world commercial servers.
The concept of doubly-efficient interactive proofs was first introduced in [16], followed by [28, 27]. In this context, “doubly-efficient” means that the prover must run in polynomial time and the verifier must be “super-efficient”, i.e., his complexity must be close to linear in the size of the problem. Unfortunately, in many practical scenarios, even linear complexity is unacceptable for the verifier. Concrete examples of this are when the problem itself can be solved in linear time, but due to the sheer size of the problem the verifier is incapable of performing the computation alone. In such cases, a high-degree polynomial-time prover is clearly impractical too.

A closely related line of work is Probabilistically Checkable Proofs (PCP) [6, 7, 3, 8, 26], where the prover is required to commit to a proof which is usually too long for the verifier to process. The verifier can then sample this proof randomly in a few locations and be convinced of the correctness of the proof with high probability. The celebrated PCP theorem [1, 2] states that any problem in NP admits a PCP with verifier complexity that is polylogarithmic in the size of the problem. However, from a practical perspective, it is not clear how this initial commitment can be implemented. One possibility is to rely on Merkle commitments with the help of collision-resistant hash functions and assume that the prover is computationally limited. Alternatively, the prover can send the entire proof to a trusted third party which will be the point of contact for the verifier. However, these approaches alter the setup and, more importantly, make strong assumptions such as the existence of trusted third parties or the computational limitation of the prover. Other PCP-based proof systems that are secure against computationally limited provers include [19] and [24].

The notion of verifiable computing was introduced in [14].
Motivated by practical considerations, in a verifiable computing setting one assumes that a computationally limited verifier (user) delegates a task to a prover (server). The prover must then cooperate with the verifier in the computation of the function in such a way that the verifier remains efficient and convinced of the validity of the results. There are subtle differences between this model and the classical notion of proof systems. In particular, it is usually assumed that the verifier may run a one-time initialization phase that is computationally expensive. The cost of this computation is amortized over many runs of the algorithm, corresponding to the evaluation of the same function over many different inputs. In the framework of [14], the function to be computed is characterized by its boolean circuit, which is evaluated with the help of Yao's garbled circuits [32, 21] and Fully Homomorphic Encryption. In subsequent works [15, 25], which led to the development of several zkSNARKs (zero-knowledge succinct non-interactive arguments of knowledge) [18, 22, 10, 30], it was suggested to represent arbitrary arithmetic and logical circuits as Quadratic Arithmetic Programs and Quadratic Span Programs, which are then evaluated at encrypted values to produce proofs of correctness. These algorithms can be proven secure against provers who are not powerful enough to reverse such encryptions.

Recently, a new line of work in the cryptography community has focused on the verifiable evaluation of specific functions such as high-degree polynomials or multiplication of large matrices. These functions serve as building blocks for many applications such as machine learning and blockchain. Advances in this area have led to extremely efficient algorithms for verifiable polynomial evaluation [9, 13, 5, 12], matrix multiplication [33, 13], and modular exponentiation [11].
Unfortunately, similar to the generic verifiable computing algorithms discussed above, these algorithms rely on unproven cryptographic assumptions. To overcome this limitation, recent efforts have focused on information-theoretic verifiable polynomial evaluation algorithms which are secure against unbounded adversarial provers [29]. Nevertheless, to achieve information-theoretic security, [29] pays a significant cost in terms of the complexity of the verifier. While the cryptographic approaches lead to logarithmic or even constant verification time, the verifier in [29] runs in time O(√d).

In this work, we design an information-theoretic verifiable polynomial evaluation algorithm which is super-efficient for the verifier and efficient for the prover. Even though our algorithm is interactive, the number of interactions only grows logarithmically as a function of the degree of the polynomial. Similar to other verifiable computing algorithms, in our setting the verifier performs a one-time initialization phase whose complexity is comparable to the complexity of evaluating the polynomial once.

Theorem 1.1.
There exists a public-coin verifiable polynomial evaluation algorithm with initialization complexity of O(d^{1+ε}), verifier complexity of O(d^ε), prover complexity of O(d^{1+ε}), and round complexity of O(log d), which satisfies the completeness and soundness properties as defined in Section 1.1.

Similar in nature to many other interactive proof systems [23, 6, 8], the core idea behind our algorithm is to force a dishonest prover to provide false results on problems that become progressively easier to verify throughout the interactions. To be more specific, let us look at a simple example where the polynomial f(x) = a_0 + a_1 x + ··· + a_{d−1} x^{d−1} is to be evaluated, where d is even. Suppose the prover provides a false answer f̂(x) ≠ f(x). The protocol then requires the prover to provide his evaluation of two more functions, namely f^{(0)}(y) = a_0 + a_2 y + a_4 y^2 + ··· + a_{d−2} y^{d/2−1} and f^{(1)}(y) = a_1 + a_3 y + a_5 y^2 + ··· + a_{d−1} y^{d/2−1}, both evaluated at y = x^2. Obviously, we must have f^{(0)}(y) + x f^{(1)}(y) = f(x), which can be easily checked by the verifier. Since f̂(x) ≠ f(x), the prover must either provide f̂^{(0)}(y) ≠ f^{(0)}(y) or f̂^{(1)}(y) ≠ f^{(1)}(y) in order to pass the verification f̂^{(0)}(y) + x f̂^{(1)}(y) = f̂(x). Voilà, the prover has been forced to lie again. Note also that the degree of f^{(0)} and f^{(1)} is only d/2. The verifier could then climb down the branches of this binary tree and, after log d rounds, verify the correctness of all the constant polynomials corresponding to the leaves of the tree. But this would take him O(d) operations to accomplish. Instead, he computes a random linear combination α f̂^{(0)}(y) + β f̂^{(1)}(y) and uses this as the reference point for the next iteration.
Provided that these random coefficients are selected over sufficiently large sets, we will have α f̂^{(0)}(y) + β f̂^{(1)}(y) ≠ α f^{(0)}(y) + β f^{(1)}(y) with high probability, and the error will propagate to the next iteration of the algorithm. After log d iterations, the verifier is left with a constant polynomial that he needs to verify. We note that this algorithm is inspired by the Probabilistically Checkable Proof of Proximity recently proposed in [6] (despite this, our algorithm is clearly not a PCP but an IP).

As it turns out, the fact that the degree of the polynomial decreases by a factor of 2 at each iteration does not immediately translate to an easier verification. Unfortunately, each coefficient of the newly constructed polynomial is now a linear combination of two coefficients in the original polynomial. Therefore, evaluating the new polynomial from scratch still takes O(d). In particular, after log d iterations, we are left with a constant which is a (random) linear combination of all the d coefficients of the original polynomial. To help the verifier ensure the validity of this linear combination, we allow for an initialization phase, whereby the verifier computes all the possible outcomes of the algorithm and stores them in a look-up table in his memory. For this to be feasible, we need the random coefficients to be selected from a set that is not too large. By properly choosing the size of this set, we guarantee that the look-up table is of size O(d^{1+ε}) and that it can be computed in time O(d^{1+ε}). The verifier can then check the validity of the constant polynomial at the end of log d iterations by comparing the returned result to the corresponding entry in the look-up table.

2 The Algorithm

In this section we describe an interactive algorithm for verifiable evaluation of a function f(x) = a_0 + a_1 x + ··· + a_{d−1} x^{d−1} over members of a finite field F. The coefficients a_0, ···, a_{d−1} are arbitrary members of F.
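To make the degree-halving step from the overview concrete, here is a minimal sketch of the identity f(x) = f^{(0)}(x^2) + x f^{(1)}(x^2) over a prime field. The modulus, polynomial, and evaluation point are arbitrary illustrative choices, not part of the protocol.

```python
# Sketch of the halving identity: f(x) = f0(x^2) + x * f1(x^2), where f0
# holds the even-indexed coefficients of f and f1 the odd-indexed ones.
# The field and the polynomial are illustrative choices.
P = 2**31 - 1  # a prime modulus

def poly_eval(coeffs, x, p=P):
    """Horner evaluation of sum(coeffs[i] * x^i) mod p."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

f = [3, 1, 4, 1, 5, 9, 2, 6]          # degree-7 polynomial, d = 8
f0, f1 = f[0::2], f[1::2]             # even / odd coefficient split

x = 123456
y = pow(x, 2, P)                      # y = x^2
lhs = poly_eval(f, x)
rhs = (poly_eval(f0, y) + x * poly_eval(f1, y)) % P
assert lhs == rhs                     # the verifier's consistency check
```

The point of the check is that the verifier never re-evaluates f itself; it only tests this linear relation between the prover's claims, which is what keeps each round cheap.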
2.1 Definitions

Let η > 1 be an integer and let c > 1, c ∈ R, be such that cη ∈ Z. These two are design parameters that can depend on d in general. Throughout this section, we will assume that c is a constant and that η grows at most polylogarithmically with d. Let L ⊆ F be a set of size cη and let H ⊂ L be of size η. Let us represent the members of L as L = {α_0, ···, α_{cη−1}}. Without loss of generality, we can assume H = {α_0, ···, α_{η−1}}. The sets L and H are publicly known. For i ∈ [0 : η−1] define

Z_i(β) = ∏_{j ∈ [0:η−1]\{i}} (β − α_j)/(α_i − α_j).   (2.1)

Note that for i ∈ [0 : η−1] and k ∈ [0 : η−1] we have Z_i(α_k) = 1 if i = k and Z_i(α_k) = 0 if i ≠ k. Let r = log d / log η. To simplify matters, we assume r is an integer (otherwise, we set r = ⌈log d / log η⌉). Iteratively define the polynomials g^{(b_1,···,b_{ℓ−1})}(α, x) and f^{(b_1,···,b_ℓ)}(x), for ℓ ∈ [1 : r] and (b_1, ···, b_ℓ) ∈ [0 : cη−1]^ℓ, as follows. If f^{(b_1,···,b_{ℓ−1})}(x) = e_0 + e_1 x + e_2 x^2 + ···, then

g^{(b_1,···,b_{ℓ−1})}(α, x) = Z_0(α)[e_0 + e_η x + e_{2η} x^2 + ···]   (2.2)
  + Z_1(α)[e_1 + e_{η+1} x + e_{2η+1} x^2 + ···]   (2.3)
  + ···   (2.4)
  + Z_{η−1}(α)[e_{η−1} + e_{2η−1} x + e_{3η−1} x^2 + ···],   (2.5)

and

f^{(b_1,···,b_ℓ)}(x) = g^{(b_1,···,b_{ℓ−1})}(α_{b_ℓ}, x).   (2.6)

As the starting point of the iteration, let f^{(b_1,···,b_{ℓ−1})}(x) = f(x) for ℓ = 1. Note that g^{(b_1,···,b_{ℓ−1})}(α, x) is a polynomial of degree η−1 as a function of α. Furthermore, note that

∑_{b_ℓ ∈ [0:η−1]} x^{b_ℓ} f^{(b_1,···,b_ℓ)}(x^η) = f^{(b_1,···,b_{ℓ−1})}(x).   (2.7)

Define h(b_1, ···, b_r) as

h(b_1, ···, b_r) = f^{(b_1,···,b_r)}(x),  ∀ (b_1, ···, b_r) ∈ [0 : cη−1]^r.   (2.8)

Since f^{(b_1,···,b_r)}(x) is a constant function of x, h(b_1, ···, b_r) only depends on (b_1, ···, b_r) and not on x. In the initialization phase, the verifier computes and stores h(b_1, ···, b_r) in a look-up table for all possible (b_1, ···, b_r) ∈ [0 : cη−1]^r.
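A small numerical sketch of this construction may help: it builds the polynomials Z_i over H, folds f via (2.2)–(2.6), checks the consistency identity (2.7), and fills the look-up table h. The parameters (d = 9, η = 3, c = 2), the field, and the coefficients are illustrative choices; modular inverses use Python 3.8+'s three-argument pow.

```python
# Sketch of the eta-ary folding (2.2)-(2.6) and the identity (2.7).
# Illustrative parameters: d = 9, eta = 3, c = 2, so L has c*eta = 6
# points and the recursion depth is r = log d / log eta = 2.
P = 101                                   # small prime field, illustrative
eta, c, d = 3, 2, 9
L = list(range(1, c * eta + 1))           # alpha_0 .. alpha_{c*eta-1}
H = L[:eta]                               # alpha_0 .. alpha_{eta-1}

def Z(i, beta):
    """Z_i(beta) = prod_{j != i} (beta - alpha_j)/(alpha_i - alpha_j) mod P."""
    num, den = 1, 1
    for j in range(eta):
        if j != i:
            num = num * (beta - H[j]) % P
            den = den * (H[i] - H[j]) % P
    return num * pow(den, -1, P) % P

def fold(coeffs, b):
    """Combine the eta interleaved sub-polynomials with weights Z_j(alpha_b)."""
    m = len(coeffs) // eta
    return [sum(Z(j, L[b]) * coeffs[j + i * eta] for j in range(eta)) % P
            for i in range(m)]

def poly_eval(coeffs, x):
    acc = 0
    for cf in reversed(coeffs):
        acc = (acc * x + cf) % P
    return acc

f = [7, 2, 9, 4, 1, 8, 3, 5, 6]           # arbitrary degree-8 polynomial
x = 10

# Identity (2.7): sum over b in [0:eta-1] of x^b * f^{(b)}(x^eta) = f(x).
lhs = sum(pow(x, b, P) * poly_eval(fold(f, b), pow(x, eta, P))
          for b in range(eta)) % P
assert lhs == poly_eval(f, x)

# Initialization: look-up table h(b1, b2), the constant after r = 2 folds.
h = {(b1, b2): fold(fold(f, b1), b2)[0]
     for b1 in range(c * eta) for b2 in range(c * eta)}
```

Note how, for b ∈ [0 : η−1], the weights Z_j(α_b) collapse to a 0/1 indicator, so fold simply extracts the b-th interleaved sub-polynomial; for b ∈ [η : cη−1] it produces the extrapolated combinations that populate the rest of the table.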
2.2 The Initialization Phase

In total, the number of evaluation points is

λ = (c·η)^r = c^r · d.   (2.9)

We will now describe an algorithm for efficient computation of the look-up table. We will show that, due to the recursive structure of h(·), its evaluation at all λ points can be computed in O(c^r · d · η).

2.2.1 Efficient Computation of the Look-up Table

A naive approach to obtaining the λ = c^r · d entries of the look-up table is to compute them individually, which would take O(λ · d). However, due to the recursive structure of the function h(·), we can be much more efficient. Specifically, let a_i^{(b_1,···,b_ℓ)} be the coefficient of x^i in f^{(b_1,···,b_ℓ)}(x) for each ℓ ∈ [0 : r] and all (b_1, ···, b_ℓ) ∈ [0 : cη−1]^ℓ. We have the following recursion:

a_i^{(b_1,···,b_ℓ)} = ∑_{j ∈ [0:η−1]} a_{j+iη}^{(b_1,···,b_{ℓ−1})} Z_j(α_{b_ℓ}),  ∀ i ∈ [0 : d/η^ℓ − 1], (b_1, ···, b_ℓ) ∈ [0 : cη−1]^ℓ,   (2.10)

Algorithm 1: Efficient Computation of the Look-up Table
Input: (a_0, ···, a_{d−1}), η, c, L = {α_0, ···, α_{cη−1}}.
Output: The look-up table h.
  a_i^{()} = a_i for i ∈ [0 : d−1].
  Z_{i,j} = ∏_{k ∈ [0:η−1]\{i}} (α_j − α_k)/(α_i − α_k) for i ∈ [0 : η−1] and j ∈ [0 : cη−1].
  r = ⌈log d / log η⌉.
  for ℓ ∈ [1 : r] do
    a_i^{(b_1,···,b_ℓ)} = ∑_{j ∈ [0:η−1]} a_{j+iη}^{(b_1,···,b_{ℓ−1})} Z_{j,b_ℓ} for all i ∈ [0 : d/η^ℓ − 1] and (b_1, ···, b_ℓ) ∈ [0 : cη−1]^ℓ.
  end for
  h(b_1, ···, b_r) = a_0^{(b_1,···,b_r)} for all (b_1, ···, b_r) ∈ [0 : cη−1]^r.
  return h.

The recursion (2.10) holds for all ℓ ∈ [1 : r]. Furthermore, for ℓ = 0, we define a_i^{()} = a_i. When ℓ = r, the polynomial f^{(b_1,···,b_r)}(x) is of degree zero and is equal to a_0^{(b_1,···,b_r)}. Therefore, we have

h(b_1, ···, b_r) = a_0^{(b_1,···,b_r)}.   (2.11)

In order to compute h(b_1, ···, b_r), we start by computing a table of size cη^2 that stores all Z_i(α_j) for i ∈ [0 : η−1] and j ∈ [0 : cη−1]. Afterwards, we iteratively compute the values of a_i^{(b_1,···,b_ℓ)} until we reach the leaves of the tree, which give us h(b_1, ···, b_r). Since we have pre-computed the values of Z_i(α_j), finding each a_i^{(b_1,···,b_ℓ)} based on (2.10) only takes O(η). At the ℓth level of the tree, we must compute a_i^{(b_1,···,b_ℓ)} for all (b_1, ···, b_ℓ) ∈ [0 : cη−1]^ℓ and all i ∈ [0 : d/η^ℓ − 1]. Therefore, the computation at level ℓ takes O((c·η)^ℓ · (d/η^ℓ) · η) = O(c^ℓ · d · η) operations. As a result, the entire tree can be computed in

c·d·η + c^2·d·η + ··· + c^r·d·η ≤ c^r·d·η·(c/(c−1)) = O(c^r·d·η).

This procedure has been summarized in Algorithm 1. For an illustration, see Figure 1.

2.3 The Verification Protocol

The verifier is interested in evaluating f(x) = a_0 + a_1 x + ··· + a_{d−1} x^{d−1}. We assume that both f(·) and x are publicly known. The prover sends f̂(x) to the verifier. If the prover is honest, then f̂(x) = f(x). Otherwise, it can be an arbitrary member of F. The prover also sends the verifier f̂^{(s)}(x^{(1)}) for all s ∈ [0 : η−1], where x^{(1)} = x^η.
The verifier checks whether

∑_{s ∈ [0:η−1]} x^s f̂^{(s)}(x^{(1)}) = f̂(x).   (2.12)

If not, he rejects the result. Next, the verifier finds the polynomial ĝ(α, x^{(1)}) of degree η−1 (in α) by interpolating the points (α_s, f̂^{(s)}(x^{(1)})), s ∈ [0 : η−1]. He then chooses b_1 ∈ [0 : cη−1] uniformly at random and finds f̂^{(b_1)}(x^{(1)}) = ĝ(α_{b_1}, x^{(1)}). The verifier then sends b_1 to the prover. Next, the prover sends the verifier f̂^{(b_1,s)}(x^{(2)}) for all s ∈ [0 : η−1], where x^{(2)} = x^{η^2}. The verifier checks if

∑_{s ∈ [0:η−1]} x^{ηs} f̂^{(b_1,s)}(x^{(2)}) = f̂^{(b_1)}(x^{(1)}).   (2.13)

If not, he rejects the result. The algorithm now proceeds with f̂^{(b_1)}(x^{(1)}) taking the role of f̂(x). This process continues until the prover sends the verifier ĥ(b_1, ···, b_r) = f̂^{(b_1,···,b_r)}(x^{(r)}). The verifier checks this against the correct value of h(b_1, ···, b_r) stored in his look-up table. If they are not equal, the result is rejected.

(We are assuming that η grows at most polylogarithmically with d, so the complexity of computing the Z_i(α_j) will be negligible compared to the overall complexity of the initialization phase. More on the proper choice of η in Section 3.1.)

Figure 1: Efficient computation of the look-up table. Each box represents one polynomial, starting with f(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 at the root. The circles within each box represent the coefficients of the corresponding polynomial. Each edge is multiplied by Z_i(α_j), where j determines the color (red = 0, blue = 1, green = 2) and i determines the style of the edge (solid for i = 0 and dashed for i = 1). The two edges merging at each node are summed up to form the value of that node. At the base of the tree we have h(b_1, b_2) = a_0^{(b_1,b_2)}. In this example we have d = 4, η = 2, c = 3/2.

To amplify the probability of catching a dishonest prover, this entire algorithm is run m times. If all the m experiments pass, the verifier accepts the result. We will see in Section 3 that the proper choice of m is (c/(c−1))^r. This process has been summarized in Algorithm 2 and illustrated in Figure 2. For notational simplicity, the algorithm is described as m consecutive rounds, but the m rounds can be trivially parallelized to reduce the overall round complexity to r.

3 Analysis

Completeness: If the prover is honest, he will provide the correct f(x) and the correct values of f^{(b_1,···,b_ℓ)}(x^{(ℓ)}) for all ℓ ∈ [1 : r], which will clearly pass all the verification tests.

(Information-theoretic) soundness: Soundness follows from the simple principle that two distinct polynomials of degree η−1 must disagree on at least (c−1)η + 1 points of any set of size cη. Suppose the prover starts by sending the verifier a wrong value of f̂(x). In order to pass (2.12), he must then provide the verifier with at least one wrong value f̂^{(s)}(x^{(1)}). Because of this, the polynomial ĝ(α, x^{(1)}), as a function of α, will be distinct from the correct polynomial g(α, x^{(1)}). Due to the observation above, these two polynomials will differ on at least (c−1)η + 1 members of L. Therefore, if b_1 is chosen uniformly at random over [0 : cη−1], the value of f̂^{(b_1)}(x^{(1)}) = ĝ(α_{b_1}, x^{(1)}) will be different from the correct value f^{(b_1)}(x^{(1)}) with probability at least (c−1)/c. With high probability, the error continues to propagate through the interactions between the verifier and the prover until it reaches level r, at which point the verifier can detect it by checking it against his stored value h(b_1, ···, b_r). The probability that an adversarial prover can successfully pass all the verifications is bounded by

p ≤ 1/c + ((c−1)/c)·(1/c) + ((c−1)/c)^2·(1/c) + ··· + ((c−1)/c)^{r−1}·(1/c) = 1 − (1 − 1/c)^r.   (3.14)
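The mechanics above can be simulated end to end. The sketch below runs one experiment (m = 1) of the verifier's protocol, with the honest prover's messages generated inline; the parameters (d = 9, η = 3, c = 2), the field, and the polynomial are illustrative. An honest prover passes, while a prover who misreports f(x) but otherwise answers truthfully fails the very first consistency check. (An adaptive cheater must keep lying, and is caught at the end with the probability analyzed above.)

```python
# One run of the interactive protocol (m = 1), honest-prover messages inline.
# Illustrative parameters: d = 9, eta = 3, c = 2, P = 101.
import random

P, eta, c, d = 101, 3, 2, 9
L = list(range(1, c * eta + 1))
H = L[:eta]
r = 2                                     # log d / log eta

def Z(i, beta):
    num, den = 1, 1
    for j in range(eta):
        if j != i:
            num, den = num * (beta - H[j]) % P, den * (H[i] - H[j]) % P
    return num * pow(den, -1, P) % P

def fold(coeffs, b):
    m = len(coeffs) // eta
    return [sum(Z(j, L[b]) * coeffs[j + i * eta] for j in range(eta)) % P
            for i in range(m)]

def poly_eval(coeffs, x):
    acc = 0
    for cf in reversed(coeffs):
        acc = (acc * x + cf) % P
    return acc

f = [7, 2, 9, 4, 1, 8, 3, 5, 6]
x0 = 10

# Initialization: look-up table h over all (b1, b2).
h = {(b1, b2): fold(fold(f, b1), b2)[0]
     for b1 in range(c * eta) for b2 in range(c * eta)}

def run_protocol(initial_claim):
    """Verifier side of one experiment; prover messages computed in place."""
    coeffs, x, current, bs = f, x0, initial_claim, ()
    for _ in range(r):
        x_next = pow(x, eta, P)
        claims = [poly_eval(fold(coeffs, s), x_next) for s in range(eta)]
        # Consistency check (2.12)/(2.13) between consecutive levels.
        if sum(pow(x, s, P) * claims[s] for s in range(eta)) % P != current:
            return "reject"
        b = random.randrange(c * eta)       # public coin
        # Interpolated reference point g-hat(alpha_b, .) for the next level.
        current = sum(Z(s, L[b]) * claims[s] for s in range(eta)) % P
        bs, x, coeffs = bs + (b,), x_next, fold(coeffs, b)
    return "accept" if current == h[bs] else "reject"

assert run_protocol(poly_eval(f, x0)) == "accept"            # completeness
assert run_protocol((poly_eval(f, x0) + 1) % P) == "reject"  # a detected lie
```

The verifier's work per round is only the O(η) checks and the O(η) linear combination; it never touches the d coefficients of f after initialization.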
Figure 2: An illustration of the proposed algorithm. The blue values are provided by the prover. The red boxed values are computed by the verifier after interpolating the blue values at the same level. The verifier checks whether consecutive levels of the tree are consistent (identities marked by ?=). At the lowest level, the boxed red value is compared against a pre-computed look-up table. In this example we have d = 4, η = 2, c = 2.

Algorithm 2: Interactive Verifiable Polynomial Evaluation, Verifier's Protocol
Input: x, η, c, d, Z_{i,j} for i ∈ [0 : η−1], j ∈ [0 : cη−1], and the look-up table h.
Output: One bit, accept or reject.
  r = ⌈log d / log η⌉.
  f̂^{()} = Input().
  for i ∈ [1 : m] do
    for ℓ ∈ [1 : r] do
      f̂^{(b_1,···,b_{ℓ−1},s)} = Input() for all s ∈ [0 : η−1].
      if f̂^{(b_1,···,b_{ℓ−1})} ≠ ∑_{s ∈ [0:η−1]} x^s f̂^{(b_1,···,b_{ℓ−1},s)} then return reject. end if
      Choose b_ℓ ∼ Uniform([0 : cη−1]) and reveal b_ℓ to the prover.
      Compute f̂^{(b_1,···,b_ℓ)} = ∑_{s ∈ [0:η−1]} Z_{s,b_ℓ} f̂^{(b_1,···,b_{ℓ−1},s)}.
      Set x = x^η.
    end for
    if f̂^{(b_1,···,b_r)} ≠ h(b_1, ···, b_r) then return reject. end if
  end for
  return accept.

The verifier proceeds to run the experiment m times and rejects the result if any of the m experiments fail. We want to choose m such that p^m < 1/2. By choosing m = (c/(c−1))^r, we will have

p^m = (1 − 1/m)^m < 1/e < 1/2.   (3.15)

Computational Complexity of the Initialization: Based on the analysis in Section 2.2.1, the initialization phase can be done in time

C_ini = O(c^{log d / log η} · d · η).   (3.16)

Computational Complexity of the Verifier: The verifier runs m (parallel) experiments, each consisting of r rounds. Each round takes O(η) operations. Therefore, the overall complexity of the verifier is

C_ver = O(m · r · η) = O(η · (c/(c−1))^{log d / log η} · (log d / log η)).   (3.17)

Computational Complexity of the Prover: An honest prover can also pre-compute the coefficients of the polynomials f^{(b_1,···,b_ℓ)}(x) in his own initialization phase and store them locally in order to reduce his complexity. In round ℓ, the prover must evaluate η polynomials, each of degree d/η^ℓ. Therefore, the prover only needs to perform m·(d + (d/η)·η + (d/η^2)·η + ··· + (d/η^r)·η) ≤ m·d·η/(η−1) = O(m·d) computations:

C_pro = O(m · d) = O(d · (c/(c−1))^{log d / log η}).   (3.18)

(Even in the absence of an initialization phase for the prover, the complexity remains O(m·d·η) for each of the r rounds, resulting in an overall complexity of O(m·d·η·r). This is still of the form O(d^{1+ε}) for the choice of η and c in Section 3.1.)
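A quick numeric check of the amplification step (the values of c and r below are illustrative): with m = ⌈(c/(c−1))^r⌉ parallel experiments, the cheating-success probability p from (3.14) indeed drops below 1/e overall, as claimed in (3.15).

```python
import math

c, r = 2.0, 10                     # illustrative parameters
p = 1 - (1 - 1 / c) ** r           # single-experiment cheating success, (3.14)
m = math.ceil((c / (c - 1)) ** r)  # number of parallel experiments
assert p ** m < 1 / math.e < 0.5   # overall soundness error, as in (3.15)
```

For c = 2 and r = 10 this gives m = 1024 and p^m ≈ (1 − 1/1024)^1024, just below 1/e.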
Round Complexity of the Algorithm: As mentioned in Section 2.3, we can run all the m experiments in parallel. As a result, the round complexity of the algorithm is only

C_rnd = log d / log η.   (3.19)

3.1 Choice of Parameters

We have two parameters to play with, namely c and η. Firstly, we want to make sure that the initialization phase can be done in O(d^{1+ε}). For this purpose, we choose η = (log d)^ω for an arbitrary real number ω > 0. We also fix c ∈ R, c > 1, to be a constant. To see why C_ini = O(d^{1+ε}), note that

C_ini = c^r · η · d = c^{log d / log η} · η · d = d^{(log c)/(ω log log d)} · d · (log d)^ω = O(d^{1 + (log c)/(ω log log d)}).

Our second criterion is to achieve C_ver = O(d^ε). This requirement is automatically satisfied with the above choices of η and c:

C_ver = O(η · (c/(c−1))^{log d / log η} · (log d / log η)) = O(d^{b/log log d} · (log d)^{ω+1}/(ω log log d)) = O(d^{b/log log d}),   (3.20)

where b = (1/ω)·log(c/(c−1)) is a constant. Finally, the complexity of the prover is given by

C_pro = O(d · (c/(c−1))^{log d / log η}) = O(d^{1 + b/log log d}),   (3.21)

and the round complexity is

C_rnd = log d / log η = O(log d).   (3.22)

4 Multivariate Polynomial Evaluation

Consider an n-variate polynomial of degree d−1 in each variable,

f(x_1, ···, x_n) = ∑_{i_1 ∈ [0:d−1]} ··· ∑_{i_n ∈ [0:d−1]} a_{i_1,···,i_n} ∏_{j ∈ [1:n]} x_j^{i_j}.   (4.23)

A verifier wishes to evaluate this polynomial at (x_1, ···, x_n) with the help of a prover. A simple variation of Algorithm 2 can be used for this purpose. First, the verifier treats f(x_1, ···, x_n) as a univariate polynomial in x_1 and applies Algorithm 2 in order to reduce the degree of x_1 to zero after r = log d / log η interactions with the prover. Now, he is left with a new polynomial that only has n−1 variables. After n·r rounds, the number of variables will reduce to zero, and the verifier will be left with a constant that he can check against a look-up table of size (c·η)^{nr}.
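The correspondence between univariate and multivariate instances discussed next rests on the substitution g(x) = f(x, x^d, ···, x^{d^{n−1}}), a Kronecker-style re-indexing of the coefficients. A small numerical sketch (d, n, the field, and the random coefficients are illustrative choices):

```python
# Kronecker-style substitution: a univariate g of degree d^n - 1 is the
# n-variate f of (4.23) evaluated at (x, x^d, ..., x^(d^(n-1))).
import itertools
import random

d, n, P = 3, 2, 101                      # illustrative: degree d^n - 1 = 8
a = {idx: random.randrange(P)            # coefficients a_{i1,...,in}
     for idx in itertools.product(range(d), repeat=n)}

def f_multi(xs):
    """f(x1, ..., xn) = sum a_{i1..in} * prod xj^{ij}."""
    total = 0
    for idx, coef in a.items():
        term = coef
        for xj, ij in zip(xs, idx):
            term = term * pow(xj, ij, P) % P
        total = (total + term) % P
    return total

def g_uni(x):
    """g(x) = sum a_{i1..in} * x^(i1 + i2*d + ... + in*d^(n-1))."""
    total = 0
    for idx, coef in a.items():
        e = sum(ij * d**j for j, ij in enumerate(idx))
        total = (total + coef * pow(x, e, P)) % P
    return total

x = 7
assert g_uni(x) == f_multi([pow(x, d**j, P) for j in range(n)])
```

Since every exponent i_1 + i_2 d + ··· + i_n d^{n−1} hits each value in [0 : d^n − 1] exactly once, no coefficient information is lost in either direction, which is what makes the two problems interchangeable.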
It is easy to see that if we resort to this modified algorithm, all the results in Section 3 remain valid, except that d must be replaced with d^n (the number of terms in f(x_1, ···, x_n)). For instance, the complexity of the verifier will be

C_ver,n = O(d^{bn/(log n + log log d)}),   (4.24)

and the complexity of the prover will be

C_pro,n = O(d^{n + bn/(log n + log log d)}).   (4.25)

It is also noteworthy that any improvement in the multivariate case directly translates to a better univariate algorithm. To see why, consider an arbitrary univariate polynomial g(x) of degree d^n − 1. Without loss of generality, we can represent this polynomial as

g(x) = ∑_{i_1 ∈ [0:d−1]} ··· ∑_{i_n ∈ [0:d−1]} a_{i_1,···,i_n} x^{i_1 + i_2 d + ··· + i_n d^{n−1}} = f(x, x^d, ···, x^{d^{n−1}}).   (4.26)

Applying Algorithm 2 on g(x) results in a verifier complexity of O(d^{bn/(log n + log log d)}), which is the same as C_ver,n. If we can design an n-variate algorithm that achieves a verifier complexity smaller than C_ver,n, we can apply it to f(x, x^d, ···, x^{d^{n−1}}) and improve upon the complexity of Algorithm 2 applied on g(x).

Acknowledgement
The authors would like to thank Ali Rahimi for the fruitful discussions.
References

[1] Arora, S., Lund, C., Motwani, R., Sudan, M., and Szegedy, M. Proof verification and the hardness of approximation problems. Journal of the ACM (JACM) 45, 3 (1998), 501–555.
[2] Arora, S., and Safra, S. Probabilistic checking of proofs: A new characterization of NP. Journal of the ACM (JACM) 45, 1 (1998), 70–122.
[3] Babai, L., Fortnow, L., Levin, L. A., and Szegedy, M. Checking computations in polylogarithmic time. In Proceedings of the twenty-third annual ACM symposium on Theory of computing (1991), ACM, pp. 21–31.
[4] Babai, L. Trading group theory for randomness. In Proceedings of the seventeenth annual ACM symposium on Theory of computing (1985), ACM, pp. 421–429.
[5] Backes, M., Fiore, D., and Reischuk, R. M. Verifiable delegation of computation on outsourced data. In Proceedings of the 2013 ACM SIGSAC conference on Computer & communications security (2013), ACM, pp. 863–874.
[6] Ben-Sasson, E., Bentov, I., Horesh, Y., and Riabzev, M. Fast Reed-Solomon interactive oracle proofs of proximity. In 45th International Colloquium on Automata, Languages, and Programming (ICALP) (2018), Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik.
[7] Ben-Sasson, E., Chiesa, A., and Spooner, N. Interactive oracle proofs. In Theory of Cryptography Conference (2016), Springer, pp. 31–60.
[8] Ben-Sasson, E., and Sudan, M. Short PCPs with polylog query complexity. SIAM Journal on Computing 38, 2 (2008), 551–607.
[9] Benabbas, S., Gennaro, R., and Vahlis, Y. Verifiable delegation of computation over large datasets. In Annual Cryptology Conference (2011), Springer, pp. 111–131.
[10] Bitansky, N., Chiesa, A., Ishai, Y., Paneth, O., and Ostrovsky, R. Succinct non-interactive arguments via linear interactive proofs. In Theory of Cryptography Conference (2013), Springer, pp. 315–333.
[11] Chen, X., Li, J., Ma, J., Tang, Q., and Lou, W. New algorithms for secure outsourcing of modular exponentiations. IEEE Transactions on Parallel and Distributed Systems 25, 9 (2014), 2386–2396.
[12] Elkhiyaoui, K., Önen, M., Azraoui, M., and Molva, R. Efficient techniques for publicly verifiable delegation of computation. In Proceedings of the 11th ACM on Asia Conference on Computer and Communications Security (2016), ACM, pp. 119–128.
[13] Fiore, D., and Gennaro, R. Publicly verifiable delegation of large polynomials and matrix computations, with applications. In Proceedings of the 2012 ACM conference on Computer and communications security (2012), ACM, pp. 501–512.
[14] Gennaro, R., Gentry, C., and Parno, B. Non-interactive verifiable computing: Outsourcing computation to untrusted workers. In Annual Cryptology Conference (2010), Springer, pp. 465–482.
[15] Gennaro, R., Gentry, C., Parno, B., and Raykova, M. Quadratic span programs and succinct NIZKs without PCPs. In Annual International Conference on the Theory and Applications of Cryptographic Techniques (2013), Springer, pp. 626–645.
[16]
Annual International Conference on the Theory and Applications of CryptographicTechniques (2013), Springer, pp. 626–645.16]
Goldwasser, S., Kalai, Y. T., and Rothblum, G. N.
Delegating computation: interactive proofs formuggles.
Journal of the ACM (JACM) 62 , 4 (2015), 27.[17]
Goldwasser, S., Micali, S., and Rackoff, C.
The knowledge complexity of interactive proof systems.
SIAM Journal on computing 18 , 1 (1989), 186–208.[18]
Groth, J.
On the size of pairing-based non-interactive arguments. In
Annual International Conference onthe Theory and Applications of Cryptographic Techniques (2016), Springer, pp. 305–326.[19]
Kilian, J.
Founding cryptography on oblivious transfer. In
Proceedings of the twentieth annual ACMsymposium on Theory of computing (1988), ACM, pp. 20–31.[20]
Li, S., Yu, M., Avestimehr, S., Kannan, S., and Viswanath, P.
PolyShard: Coded sharding achieveslinearly scaling efficiency and security simultaneously. arXiv preprint arXiv:1809.10361 (2018).[21]
Lindell, Y., and Pinkas, B.
A proof of security of YaoâĂŹs protocol for two-party computation.
Journalof Cryptology 22 , 2 (2009), 161–188.[22]
Lipmaa, H.
Succinct non-interactive zero knowledge arguments from span programs and linear error-correcting codes. In
International Conference on the Theory and Application of Cryptology and InformationSecurity (2013), Springer, pp. 41–60.[23]
Lund, C., Fortnow, L., Karloff, H., and Nisan, N.
Algebraic methods for interactive proof systems.In
Proceedings [1990] 31st Annual Symposium on Foundations of Computer Science (1990), IEEE, pp. 2–10.[24]
Micali, S.
Computationally Sound proofs. In
Proceedings 35th Annual Symposium on Foundations ofComputer Science (1994), IEEE, pp. 436–453.[25]
Parno, B., Howell, J., Gentry, C., and Raykova, M.
Pinocchio: Nearly practical verifiablecomputation. In (2013), IEEE, pp. 238–252.[26]
Polishchuk, A., and Spielman, D. A.
Nearly-linear size holographic proofs. In
Proceedings of thetwenty-sixth annual ACM symposium on Theory of computing (1994), ACM, pp. 194–203.[27]
Reingold, O., Rothblum, G. N., and Rothblum, R. D.
Constant-round interactive proofs fordelegating computation. In
Proceedings of the forty-eighth annual ACM symposium on Theory of Computing (2016), ACM, pp. 49–62.[28]
Rothblum, G. N., Vadhan, S., and Wigderson, A.
Interactive proofs of proximity: delegatingcomputation in sublinear time. In
Proceedings of the forty-fifth annual ACM symposium on Theory ofcomputing (2013), ACM, pp. 793–802.[29]
Sahraei, S., and Avestimehr, A. S.
INTERPOL: Information theoretically verifiable polynomialevaluation. arXiv preprint arXiv:1901.03379 (2019).[30]
Sasson, E. B., Chiesa, A., Garman, C., Green, M., Miers, I., Tromer, E., and Virza, M.
Zerocash: Decentralized anonymous payments from bitcoin. In (2014), IEEE, pp. 459–474.[31]
Shamir, A.
IP= PSPACE (interactive proof= polynomial space). In
Proceedings [1990] 31st AnnualSymposium on Foundations of Computer Science (1990), IEEE, pp. 11–15.[32]
Yao, A. C.
Protocols for secure computations. In
Foundations of Computer Science, 1982. SFCS’08. 23rdAnnual Symposium on (1982), IEEE, pp. 160–164.[33]
Zhang, X., Jiang, T., Li, K.-C., Castiglione, A., and Chen, X.
New publicly verifiable computationfor batch matrix multiplication.