Improved (Provable) Algorithms for the Shortest Vector Problem via Bounded Distance Decoding
aa r X i v : . [ c s . D S ] O c t Improved (Provable) Algorithms for the Shortest Vector Problem viaBounded Distance Decoding
Divesh Aggarwal , Yanlin Chen , Rajendra Kumar ⋆ , and Yixin Shen Centre for Quantum Technologies and National University of Singapore. email: [email protected] . Institute of Information Science, Academia Sinica, Taipei, Taiwan. email: [email protected] Indian Institute of Technology, Kanpur and National University of Singapore. email: [email protected] Universit´e de Paris, IRIF, CNRS, F-75006 France. email: [email protected]
Abstract.
The most important computational problem on lattices is the Shortest Vector Problem (
SVP ). Inthis paper, we present new algorithms that improve the state-of-the-art for provable classical/quantumalgorithms for
SVP . We present the following results.1. A new algorithm for
SVP that provides a smooth tradeoff between time complexity and memoryrequirement. For any positive integer ≤ q ≤ √ n , our algorithm takes q n + o ( n ) time and requires poly ( n ) · q n/q memory. This tradeoff which ranges from enumeration ( q = √ n ) to sieving ( q constant), is a consequence of a new time-memory tradeoff for Discrete Gaussian sampling abovethe smoothing parameter.2. A quantum algorithm that runs in time . n + o ( n ) and requires . n + o ( n ) classical memory andpoly ( n ) qubits. This improves over the previously fastest classical (which is also the fastest quantum)algorithm due to [ADRS15] that has a time and space complexity n + o ( n ) .3. A classical algorithm for SVP that runs in time . n + o ( n ) time and . n + o ( n ) space. This improvesover an algorithm of [CCL18] that has the same space complexity.The time complexity of our classical and quantum algorithms are obtained using a known upper boundof the kissing number which is . n . In practice most lattices have a much smaller kissing numberwhich is often o ( n ) . In that case, our classical algorithm runs in time . n and our quantum algorithmruns in time . n . Keywords:
Lattices · Shortest Vector Problem · Discrete Gaussian Sampling · Time-Space Tradeoff ·Quantum computation · Bounded distance decoding .
A lattice L = L ( b , . . . , b n ) := { P ni =1 z i b i : z i ∈ Z } is the set of all integer combinations of linearly indepen-dent vectors b , . . . , b n ∈ R n . We call n the rank of the lattice and ( b , . . . , b n ) a basis of the lattice.The most important computational problem on lattices is the Shortest Vector Problem ( SVP ). Given abasis for a lattice
L ⊆ R n , SVP asks us to compute a non-zero vector in L with the smallest Euclideannorm. Starting from the ’80s, the use of approximate and exact solvers for SVP (and other lattice prob-lems) gained prominence for their applications in algorithmic number theory [LLL82], convex optimiza-tion [Jr.83,Kan87,FT87], coding theory [dB89], and cryptanalysis tool [Sha84,Bri84,LO85]. The security ofmany cryptographic primitives is based on the worst-case hardnessof (a decision variant of) approximate
SVP to within polynomial factors [Ajt96,MR04,Reg09,Reg06,MR08,Gen09,BV14] in the sense that any crypt-analytic attack on these cryptosystems that runs in time polynomial in the security parameter implies apolynomial time algorithm to solve approximate
SVP to within polynomial factors. Such cryptosystemshave attracted a lot of research interest due to their conjectured resistance to quantum attacks.The
SVP is a well studied computational problem in both its exact and approximate (decision) versions.By a randomized reduction, it is known to be NP-hard to approximate within any constant factor, and hard ⋆ This research has been supported in part by the National Research Foundation Singapore under its AI SingaporeProgramme [Award Number: AISG-RP-2018-005] o approximate within a factor n c/ log log n for some c > under reasonable complexity-theoretic assump-tions [Mic98,Kho05,HR07]. For an approximation factor O ( n ) , one can solve SVP in time polynomial in n using the celebrated LLL lattice basis reduction algorithm [LLL82]. In general, the fastest known algo-rithm(s) for approximating SVP within factors polynomial in n rely on (a variant of) the BKZ lattice basisreduction algorithm [Sch87,SE94,AKS01,GN08,HPS11,ALNS19], which can be seen as a generalization ofthe LLL algorithm and gives an r n/r approximation in O ( r ) poly ( n ) time. All these algorithms internallyuse an algorithm for solving (near) exact SVP in lower-dimensional lattices. Therefore, finding faster algo-rithms to solve
SVP is critical to choosing security parameters of cryptographic primitives.As one would expect from the hardness results above, all known algorithms for solving exact
SVP ,including the ones we present here, require at least exponential time. In fact, the fastest known algorithmsalso require exponential space. There has been some recent evidence [AS18a] showing that one cannothope to get a o ( n ) time algorithm for SVP if one believes reasonable complexity theoretic conjectures suchas the (Gap) Exponential Time Hypothesis. Most of the known algorithms for
SVP can be broadly classifiedinto two classes: (i) the algorithms that require memory polynomial in n but run in time n O ( n ) and (ii) thealgorithms that require memory O ( n ) and run in time O ( n ) .The first class, initiated by Kannan [Kan87,Hel85,HS07,GNR10,MW15], combines basis reduction withexhaustive enumeration inside Euclidean balls. While enumerating vectors requires O ( n log n ) time, it ismuch more space-efficient than other kinds of algorithms for exact SVP .Another class of algorithms, and currently the fastest, is based on sieving. First developed by Ajtai,Kumar, and Sivakumar [AKS01], they generate many lattices vectors and then divide-and-sieve to createshorter and shorter vectors iteratively. A sequence of improvements [Reg04,NV08,MV10,PS09,ADRS15,AS18b],has led to a n + o ( n ) time and space algorithm by sieving the lattice vectors and carefully controlling thedistribution of output, thereby outputting a set of lattice vectors that contains the shortest vector with over-whelming probability.An alternative approach using the Voronoi cell of the lattice was proposed by Micciancio and Voul-garis [MV13] and gives a deterministic n + o ( n ) -time and n + o ( n ) -space algorithm for SVP (and many otherlattice problems).There are variants [NV08,MV10,LMV15,BDGL16] of the above mentioned sieving algorithms that, un-der some heuristic assumptions, have an asymptotically smaller (but still Θ ( n ) ) time and space complexitythan their provable counterparts. Algorithms giving a time/space tradeoff.
Even though sieving algorithms are asymptotically the fastest knownalgorithms for
SVP , the memory requirement, in high dimension, has historically been a limiting factor torun these algorithms. Some recent works [Duc18,ADH +
19] have shown how to use new tricks to make itpossible to use sieving on high-dimensional lattices in practice and benefit from their efficient running time[SVP].Nevertheless, it would be ideal and has been a long standing open question to obtain an algorithmthat achieves the “best of both worlds”, i.e. an algorithm that runs in time O ( n ) and requires memorypolynomial in n . In the absence of such an algorithm, it is desirable to have a smooth tradeoff between timeand memory requirement that interpolates between the current best sieving algorithms and the current bestenumeration algorithms.To this end, Bai, Laarhoven, and Stehl´e [BLS16] proposed the tuple sieving algorithm, providing such atradeoff based on heuristic assumptions similar in nature to prior sieving algorithms. This algorithm waslater proven to have time and space complexity k n + o ( n ) and k n/k + o ( n ) , under the same heuristic assump-tions [HK17]. One can vary the parameter k to obtain a smooth time/space tradeoff. Nevertherless, it is stilldesirable to obtain a provable variant of this algorithm that does not rely on any heuristics.Kirchner and Fouque [KF16] attempted to do this. They claim an algorithm for solving SVP in time q Θ ( n ) and in space q Θ ( n/q ) for any positive integer q > . Unfortunately, their analysis falls short of supportingtheir claimed result, and the correctness of the algorithm is not clear. We refer the reader to Section 1.3 formore details.In addition to the above, Chen, Chung, and Lai [CCL18] propose a variant of the algorithm based onDiscrete Gaussian sampling in [ADRS15]. Their algorithm runs in time . n + o ( n ) and the memory require-2ent is . n + o ( n ) . The quantum variant of their algorithm runs in time . n + o ( n ) time and has the samespace complexity. Their algorithm has the best space complexity among known provably correct algorithmsthat run in time O ( n ) .A number of works have also investigated the potential quantum speedups for lattice algorithms, and SVP in particular. A similar landscape to the classical one exists, although the quantum memory model hasits importance. While quantum enumeration algorithms only require qubits [ANS18], sieving algorithmsrequire more powerful QRAMs [LMV15,KMPM19].
We first present a new algorithm for
SVP that provides a smooth tradeoff between the time complexity andmemory requirement of
SVP without any heuristic assumptions. This algorithm is obtained by giving anew algorithm for sampling lattice vectors from the Discrete Gaussian distribution that runs in time q O ( n ) and requires q O ( n/q ) space. Theorem 1 (Time-space tradeoff for smooth discrete Gaussian, informal).
There is an algorithm that takesas input a lattice
L ⊂ R n , a positive integer q , and a parameter s above the smoothing parameter of L , and outputs q n/q samples from D L ,s using q n + o ( n ) time and poly ( q ) · q n/q space. Using the standard reduction from Bounded Distance Decoding (
BDD ) with preprocessing (where analgorithm solving the problem is allowed unlimited preprocessing time on the lattice before the algorithmreceives the target vector) to Discrete Gaussian Sampling (
DGS ) from [DRS14] and a reduction from
SVP to BDD given in [CCL18], we obtain the following.
Theorem 2 (Time-space tradeoff for
SVP ). Let n ∈ N , q ∈ [4 , √ n ] be a positive integer. Let L be the lattice ofrank n . There is a randomized algorithm that solves SVP in time q n + o ( n ) and in space poly ( n ) · q nq . If we take k = q , then the time complexity of the previous SVP algorithm becomes k . n + o ( n ) and thespace complexity poly ( n ) · k (8 n/k ) . Our tradeoff is thus the same (up to a constant in the exponents) as whatwas claimed by Kirchner and Fouque [KF16] and proven in [HK17] under heuristic assumptions .Our second result is a quantum algorithm for SVP that improves over the current fastest quantum algo-rithm for
SVP [ADRS15] (Notice that the algorithm in [ADRS15] is still the fastest classical algorithm for
SVP ). Theorem 3 (Quantum Algorithm for
SVP ). There is a quantum algorithm that solves
SVP in . n + o ( n ) timeand classical . n + o ( n ) space with an additional number of qubits polynomial in n . Our third result is a classical algorithm for
SVP that improves over the algorithm from [CCL18] andresults in the fastest classical algorithm that has a space complexity . n + o ( n ) . Theorem 4 (Algorithm for
SVP with . n + o ( n ) space). There is a classical algorithm that solves
SVP in . n + o ( n ) time and . n + o ( n ) space. The time complexity of our second and third results are obtained using a known upper bound of thekissing number which is . n . In practice most lattices have a much smaller kissing number which isoften o ( n ) . In that case, our classical algorithm runs in time . n and our quantum algorithm runs intime . n .Below in Table 1 and 2, we summarize known provable Classical and Quantum algorithms respectively.Note that all the classical algorithms are also the quantum algorithms but they don’t use any quantumpower. 3 ime Complexity Space Complexity Reference n n + o ( n ) poly ( n ) [Kan87] O ( n ) O ( n ) [AKS01] . n + o ( n ) . n + o ( n ) [PS09] n + o ( n ) n + o ( n ) [MV10] n + o ( n ) n + o ( n ) [ADRS15] . n + o ( n ) . n + o ( n ) [CCL18] . n + o ( n ) . n + o ( n ) This paper
Table 1.
Classical algorithms for Shortest vector problem.
Time Complexity Space Complexity Reference . n + o ( n ) . n + o ( n ) , QRAM [LMV15] . n + o ( n ) . n + o ( n ) [CCL18] . n + o ( n ) . n + o ( n ) This paper
Table 2.
Quantum algorithm for Shortest vector problem. [LMV15] uses the quantum RAM model. [CCL18] and ourquantum algorithm need only polynomial qubits and . n + o ( n ) classical space. Roadmap.
In the following, we give a high-level overview of our proofs in Section 1.2. We compare ourresults with the previous known algorithms that claim/conjecture a time-space tradeoff for
SVP in Sec-tion 1.3. Section 2 contain some preliminaries on lattices. The proofs of the time-space tradeoff for DiscreteGaussian sampling above the smoothing parameter and the time-space tradeoff for
SVP are given in Section3. Our classical and quantum algorithms for solving
SVP with space complexity . n + o ( n ) are presented inSection 4. Section 5 shows how the time complexity of our algorithms varies with the kissing number. We now include a high-level description of our proofs. Before describing our proof ideas, we emphasize thatit was shown in [DRS14,ADRS15] that given an algorithm for
DGS a constant factor c above the smoothingparameter, we can solve the problem of BDD where the target vector is within distance αλ ( L ) of the lattice,where the constant α < . depends on the constant c . Additionally, using [CCL18], one can enumerate alllattice points within distance qδ to a target t by querying q n times a BDD oracle with decoding distance δ (or q n/ times if we are given a quantum BDD oracle). Thus, by choosing q = ⌈ λ ( L ) /δ ⌉ and t = , analgorithm for BDD immediately gives us an algorithm for
SVP . Therefore, it suffices to give an algorithmfor
DGS above the smoothing parameter.
Time-space tradeoff for
DGS above smoothing.
Recall that efficient algorithms are known for samplingfrom a discrete Gaussian with a large enough parameter (width) [Kle00,GPV08,BLP + N = 2 n + o ( n ) vectors from the Discrete Gaussian distribution with (large)parameter s and then look for pairs of vectors whose sum is in L , or equivalently pairs of vectors thatlie in the same coset c ∈ L / L . Since there are n cosets, if we take Ω (2 n ) samples from D L ,s , almost allof the resulting vectors (except at most n vectors) will be paired and are statistically close to independentsamples from the distribution D L ,s/ √ , provided that the parameter s is sufficiently above the smoothingparameter. 4o reduce the space complexity, we modify the idea of the algorithm by generating random samples andchecking if the summation of d of those samples is in q L for some integer q . Intuitively, if we start with twolists of vectors ( L and L ) of size q O ( n/d ) from D L ,s , where s is sufficiently above the smoothing parameter,each of these vectors is contained in any coset q L + c for any c ∈ L /q L with probability roughly /q n . Wetherefore expect that the coset of a uniformly random d-combination of vectors from L is uniformly dis-tributed in L /q L . The proof of this statement follows from the Leftover Hash Lemma [ILL89]. We thereforeexpect that for any vector v ∈ L , with high probability, there is a set of d vectors x , . . . , x d in L thatsum to a vector in q L + v , and hence q (cid:16)P di =1 x i − v (cid:17) ∈ L . A lemma by Micciancio and Peikert ([MP13])shows that this vector is statistically close to a sample from the distribution, D L ,s √ d +1 /q . We can find sucha combination by trying all subsets of d vectors.We would like to repeat this and find q O ( n/d ) (nearly) independent vectors in q L . It is not immediatelyclear how to continue since, in order to guarantee independence, one would not want to reuse the al-ready used vectors x , . . . , x d and conditioned on the choice of these vectors, the distribution of the cosetscontaining the remaining vectors is disturbed and is no longer nearly uniform. By using a simple combina-torial argument, we show that even after removing any / poly ( d ) fraction of vectors from the list L , the d -combination of vectors in L has at least cq n different cosets. This is sufficient to output q O ( n/d ) indepen-dent vectors in q L with overwhelming probability. A new algorithm for
BDD with preprocessing leads to a faster quantum algorithm for
SVP . This resultimproves the quantum algorithm from [CCL18]. As mentioned above, a
BDD oracle from discrete Gaussiansampling can have a decoding distance at most . λ ( L ) , and the search space is at least n , which requiresat least n/ quantum queries. Thus, towards optimizing the algorithm for SVP , one should aim to solve α - BDD for α slightly larger than / since a larger value of α will still lead to the same running time for SVP . Using known bounds, it can be shown that such an algorithm requires . n + o ( n ) independent(preprocessed) samples from D L ,η ε ( L ) for ε = 2 − cn for some constant c .In [ADRS15], the authors gave an algorithm that runs in time n/ o ( n ) and outputs n/ o ( n ) samplesfrom D L ,s for any s ≥ √ η . ( L ) , i.e. a factor √ above the smoothing parameter). In order to obtainsamples at the smoothing parameter, we construct a dense lattice L ′ of smaller smoothing parameter than L . We then sample . n + o ( n ) vectors from D L ′ ,s and reject those that are not in L . Using the reduction fromBDD to DGS, and by repeating this algorithm, we obtain a . n + o ( n ) time and . n + o ( n ) -space algorithmto solve / - BDD with preprocessing, where each call to
BDD requires . n + o ( n ) time. Thus, the total timecomplexity of the classical algorithm is n · . n + o ( n ) , and that of the corresponding quantum algorithmis n/ · . n + o ( n ) . Covering surface of a ball by spherical caps.
As we mentioned above, one can enumerate all lattice pointswithin a qδ distance to a target t by querying q n times a BDD oracle with decoding distance δ . Our algorithmfor BDD is obtained by preparing samples from the discrete Gaussian distribution. However, note that thedecoding distance of
BDD oracle built by discrete Gaussian samples as shown in [DRS14] is successful ifthe target vector is within a radius αλ ( L ) for α < / (there is a tradeoff between α and the number of DGS samples needed), and therefore, if we choose t to be , as we do in the other algorithms mentionedabove, then q has to be at least to ensure that the shortest vector is one of the vectors output by theenumeration algorithm mentioned above. We observe here that if we choose a target t to be a randomvector “close to” but not at the origin, then the shortest vector will be within a radius δ from the target t with some probability p , and thus we can find the shortest vector by making n /p calls to the BDD oracle.An appropriate choice of the target t and the factor α gives an algorithm that runs in time n · . n + o ( n ) ,which is faster than the algorithm (running in time n . n + o ( n ) ) mentioned above.We note that the corresponding quantum algorithm runs in time n/ · . n + o ( n ) , which is significantlyslower than the quantum algorithm mentioned above. The number of samples depends on the kissing number of the lattice, we used the best known upper bound on thekissing number due to [KL78].
5e note here that the running time of this algorithm crucially depends on the kissing number of thelattice. Since a tight bound on the kissing number of a lattice is not known, the actual running time of thisalgorithm might be smaller than that promised above. For a more elaborate discussion on this, see Section 5.
Kirchner and Fouque [KF16] begin their algorithm by sampling an exponential number of vectors from thediscrete Gaussian distribution D L ,s and then using a pigeon-hole principle, show that there are two distinctsums of d vectors (for an appropriate d ) that are equal mod q L , for some large enough integer q . This resultsin a {− , , } combination of input lattice vectors (of Hamming weight at most d ) in q L ; a similar ideawas used in [BLS16] to construct their tuple sieving algorithm. In both algorithms, it is difficult to control(i) the distribution of the resulting vectors, (ii) the dependence between resulting vectors.Bai et al [BLS16] get around the above issues by making a heuristic assumption that the resulting vectorsbehave like independent samples from a “nice enough” distribution. [HK17] proved that this heuristicindeed leads to the time-memory tradeoff conjectured in [BLS16], but don’t prove correctness.Kirchner and Fouque, on the other hand, use the pigeon-hole principle to argue that there exist co-efficients α , . . . , α d ∈ {− , , } and d lattice vectors in the set of input vectors v , . . . , v d such that P di =1 α i v i q ∈ L . It is then stated that P di =1 α i v i q has a nice enough Discrete Gaussian distribution. We observethat while the resulting distribution obtained will indeed be close to a discrete Gaussian distribution, wehave no control over the parameter s of this distribution and it can be anywhere between /q and √ d/q depending on the number of nonzero coordinates in ( α , . . . , α q ) . For instance, let v , · · · , v be input vec-tors which are all from D L ,s for some large s and we want to find the collision in q L for some positiveinteger q . Suppose that we find a combination w = v + v − ( v + v ) ∈ q L and another combina-tion w = v + v − ( v + v ) ∈ q L , then by Theorem 12, one would expect that w /q ∼ D L , √ s/q and w /q ∼ D L , √ s/q . This means that the output of the exhaustive search algorithm by Kirchner and Fouquewill behave like samples taken from discrete Gaussian distributions with different parameters, making itdifficult to keep track of the standard deviation after several steps of the algorithm, and to obtain samplesfrom the Discrete Gaussian distribution at the desired parameter above the smoothing parameter. We over-come this issue by showing that there is a combination of the input vectors with a fixed Hamming weightthat is in q L as mentioned in Section 1.2.There are other technical details that were overlooked in [KF16]. In particular, one needs be carefulwith respect to the errors, both in the probability of failure and the statistical distance of the input/output.Indeed the algorithm performs an exponential number of steps, it is not enough to show that the algorithmsucceeds with “overwhelming probability” and that the output has a “negligible statistical distance” fromthe desired output. However, none of the claimed error bounds in [KF16] are proven and almost the entireproof of the Exhaustive Search (Theorem 3.4) is left to the reader. We were also unable to understand theproof of Theorem 3.6. Let N = { , , . . . , } . We use bold letters x for vectors and denote a vector’s coordinates with indices x i . Weuse log to represent the logarithm base 2 and ln to represent the natural logarithm. Throughout the paper, n will always be the dimension of the ambient space R n . Lattices. A lattice L is a discrete subgroup of R m , or equivalently the set L ( b , . . . , b n ) = ( n X i =1 x i b i : x i ∈ Z )
6f all integer combinations of n linearly independent vectors b , . . . , b n ∈ R m . Such b i ’s form a basis of L .The lattice L is said to be full-rank if n = m . We denote by λ ( L ) the first minimum of L , defined as thelength of a shortest non-zero vector of L .For a rank n lattice L ⊂ R n , the dual lattice , denoted L ∗ , is defined as the set of all points in span ( L ) thathave integer inner products with all lattice points, L ∗ = { ~w ∈ span ( L ) : ∀ ~y ∈ L , h ~w, ~y i ∈ Z } . Similarly, for a lattice basis B = ( ~b , . . . ,~b n ) , we define the dual basis B ∗ = ( ~b ∗ , . . . ,~b ∗ n ) to be the unique setof vectors in span ( L ) satisfying h ~b ∗ i ,~b j i = 1 if i = j , and , otherwise. It is easy to show that L ∗ is itself arank n lattice and B ∗ is a basis of L ∗ . Probability distributions.
Given two random variables X and Y on a set E , we denote by d SD the statisticaldistance between X and Y , which are defined by d SD ( X, Y ) = X z ∈ E (cid:12)(cid:12)(cid:12) Pr X [ X = z ] − Pr Y [ Y = z ] (cid:12)(cid:12)(cid:12) = X z ∈ E : Pr X [ X = z ] > Pr Y [ Y = z ] (cid:16) Pr X [ X = z ] − Pr Y [ Y = z ] (cid:17) . We write X is ε -close to Y to denote that the statistical distance between X and Y is at most ε . Given a finiteset E , we denote by U E a uniform random variable on E , i.e., for all x ∈ E , Pr U E [ U E = x ] = | E | . Discrete Gaussian Distribution.
For any s > , define ρ s ( x ) = exp( − π k x k /s ) for all x ∈ R n . We write ρ for ρ . For a discrete set S , we extend ρ to sets by ρ s ( S ) = P x ∈ S ρ s ( x ) . Given a lattice L , the discrete Gaussian D L ,s is the distribution over L such that the probability of a vector y ∈ L is proportional to ρ s ( y ) : Pr X ∼ D L ,s [ X = y ] = ρ s ( y ) ρ s ( L ) . The following problem plays a central role in this paper.
Definition 5.
For δ = δ ( n ) ≥ , σ a function that maps lattices to non-negative real numbers, and m = m ( n ) ∈ N , δ - DGS mσ (the Discrete Gaussian Sampling problem) is defined as follows: The input is a basis B for a lattice L ⊂ R n and a parameter s > σ ( L ) . The goal is to output a sequence of m vectors whose joint distribution is δ -close to m independent samples from D L ,s . We omit the parameter δ if δ = 0 , and the parameter m if m = 1 . We stress that δ bounds the statisticaldistance between the joint distribution of the output vectors and m independent samples from D L ,s . Weconsider the following lattice problems. Definition 6.
The search problem
SVP (Shortest Vector Problem) is defined as follows: The input is a basis B for alattice L ⊂ R n . The goal is to output a vector y ∈ L with k ~y k = λ ( L ) . Definition 7.
The search problem
CVP (Closest Vector Problem) is defined as follows: The input is a basis B for alattice L ⊂ R n and a target vector ~t ∈ R n . The goal is to output a vector ~y ∈ L with k ~y − ~t k = dist ( ~t, L ) . Definition 8.
For α = α ( n ) < / , the search problem α - BDD (Bounded Distance Decoding) is defined as follows:The input is a basis B for a lattice L ⊂ R n and a target vector ~t ∈ R n with dist ( t , L ) ≤ α · λ ( L ) . The goal is tooutput a vector ~y ∈ L with k ~y − ~t k = dist ( ~t, L ) . γ becomessmaller, α - BDD becomes more difficult as α gets larger.For convenience, when we discuss the running time of algorithms solving the above problems, we ig-nore polynomial factors in the bit-length of the individual input basis vectors (i.e. we consider only thedependence on the ambient dimension n ).For a lattice L and ε > , the smoothing parameter η ε ( L ) is the smallest s such that ρ /s ( L ∗ ) = 1 + ε . Recallthat if L is a lattice and v ∈ L then ρ s ( L + v ) = ρ s ( L ) for all s . The smoothing parameter has the followingwell-known property. Lemma 9 ([Reg09, Claim 3.8]).
For any lattice
L ⊂ R n , c ∈ R n , ε > , and s ≥ η ε ( L ) , − ε ε ≤ ρ s ( L + c ) ρ s ( L ) ≤ . Corollary 10.
Let
L ⊂ R n be a lattice, q be a positive integer, and let s ≥ η ε ( q L ) . Let C be a random coset in L /q L sampled such that Pr[ C = q L + c ] = ρ s ( q L + c ) ρ s ( L ) . Also, let U be a coset in L /q L sampled uniformly at random. Then d SD ( C, U ) ≤ ε . Proof.
By Lemma 9, we have that ρ s ( q L ) ≥ ρ s ( q L + c ) ≥ − ε ε ρ s ( q L ) , for any c ∈ L /q L and hence, ρ s ( q L + c ) ρ s ( L ) ≥ − ε ε · ρ s ( q L ) ρ s ( L ) ≥ − ε ε · q n . We conclude that d SD ( C, U ) = X c ∈L /q L : Pr[ C = c ] < Pr[ U = c ] (Pr[ U = c ] − Pr[ C = c ]) ≤ X c ∈L /q L : Pr[ C = c ] < Pr[ U = c ] Pr[ U = c ] (cid:18) − − ε ε (cid:19) ≤ X c ∈L /q L Pr[ U = c ] 2 ε ε ≤ ε ε , as needed. ⊓⊔ The following lemma gives a bound on the smoothing parameter.
Lemma 11 ([ADRS15, Lemma 2.7]).
For any lattice
L ⊂ R n , ε ∈ (0 , and k > , we have kη ε ( L ) > η ε k ( L ) Micciancio and Peikert [MP13] showed the following result about resulting distribution from the sumof many Gaussian samples.
Theorem 12 ([MP13, Theorem 3.3]).
Let L be an n dimensional lattice, z ∈ Z m a nonzero integer vector, s i ≥√ k z k ∞ · η ε ( L ) , and L + c i arbitrary cosets of L for i = 1 · · · , m . Let y i be independent vectors with distributions D L + c i ,s i , respectively. Then the distribution of y = P i z i y i is ε close to D Y,s , where Y = gcd ( z ) L + P i z i c i , and s = pP ( z i s i ) .
8e will need the following reduction from α - BDD to DGS that was shown in [DRS14].
Theorem 13 ([DRS14, Theorem 3.1],[ADRS15, Theorem 7.3]).
For any ε ∈ (0 , / , let φ ( L ) ≡ √ ln(1 /ε ) /π − o (1)2 η ε ( L ∗ ) . Then, there exists a randomized reduction from
CVP φ to . - DGS mη ε , where m = O ( n log(1 /ε ) √ ε ) and CVP φ is the prob-lem of solving CVP for target vectors that are guaranteed to be within a distance φ ( L ) of the lattice. The reductionpreserves the dimension, makes a single call to the DGS oracle, and runs in time m · poly ( n ) . We need the following relation between the first minimum of lattice and the smoothing parameter ofdual lattice. We will use this to compute the decoding distance of
BDD oracle.
Lemma 14 ([ADRS15, Lemma 6.1]).
For any lattice
L ⊂ R n , ε ∈ (0 , , we have r ln(1 /ε ) π < λ ( L ) η ε ( L ∗ ) < r β n πe · ε − /n · (1 + o (1)) , (1) and if ε ≤ ( e/β + o (1)) − n , we have r ln(1 /ε ) π < λ ( L ) η ε ( L ∗ ) < r ln(1 /ε ) + n ln β + o ( n ) π . (2) where β ( L ) ∈ [1 , . ] such that β n is the lattice kissing number. The following theorem proved in [CCL18], is required to solve
SVP by exponential number of calls to α - BDD oracle.
Theorem 15 ([CCL18, Theorem 8]).
Given a basis matrix B ⊂ R n × n for lattice L ( B ) ⊂ R n , a target vector t ∈ R n , an α - BDD oracle
BDD α with α < . , and an integer scalar p > . Let f αp : Z np → R n be f αp ( s ) = − p · BDD α ( L , ( B s − t ) /p ) + B s . If dist ( L , t ) ≤ αλ ( L ) , then the list m = { f αp ( s ) | s ∈ Z np } contains all lattice points within distance pαλ ( L ) to t . We will need the following theorems to sample the
DGS vectors with a large width.
Theorem 16 ([ADRS15],Proposition 2.17).
For any ε ≤ . , there is an algorithm that takes as input a lattice L ∈ R n , M ∈ Z > (the desired number of output vectors), and s > n log log n/ log n · η ε ( L ) , and outputs M independent samples from D L ,s in time M · poly ( n ) . Theorem 17 ([ADRS15, Theorem 5.11]).
For a lattice
L ⊂ R n , let σ ( L ) = √ η / ( L ) . Then there exists analgorithm that solves exp( − Ω ( κ )) - DGS n/ σ in time n/ κ )+ o ( n ) with space O (2 n/ ) for any κ ≥ Ω ( n ) .Moreover, if the input does not satisfy the promise, and the input parameter s < σ ( L ) = √ η / ( L ) , then thealgorithm may output M vectors for some M ≤ n/ that are exp( − Ω ( κ )) -close to M independent samples from D L ,s . Lemma 18 ([ADRS15, Lemma 5.12]).
There is a probabilistic polynomial-time algorithm that takes as input alattice
L ⊂ R n of rank n and an integer a with n/ ≤ a < n and returns a super lattice L ′ ⊃ L of index a with L ′ ⊆ L / such that for any ε ∈ (0 , , we have η ε ′ ( L ′ ) ≤ η ε ( L ) / √ with probability at least / where ε ′ := 2 ε + 2 ( n/ − a (1 + ε ) . A key element to analyze our algorithm for SVP using spherical caps is the following result, given byAggarwal and Stephens-Davidowitz [AS18a], based on a result from [CFJ13] that the probability densityfunction of the angle θ is proportional to sin n − θ . 9 heorem 19 ([AS18a], Lemma 5.6). For any integer n ≥ , let u ∈ R n be a fixed vector and t ∈ R n be auniformly random unit vector then for any < θ < θ < π , the probability that the angle between u and t isbetween θ and θ is q Z θ θ sin n − θdθ where q is some constant greater than 1. We also need some preliminaries for quantum computing as well. The following subsection is basicallytaken from Section 2.4 in [CCL18].
In this paper we use the Dirac ket-bra notation. A qubit is a unit vector in C with two (ordered) basisvectors {| i , | i} . I = (cid:20) (cid:21) , X = (cid:20) (cid:21) , Z = (cid:20) − (cid:21) , and Y = iXZ are the Pauli Matrices. A universal setof gates is H = 1 √ (cid:20) − (cid:21) , S = (cid:20) i (cid:21) , T = e iπ/ (cid:20) e − iπ/ e iπ/ (cid:21) ,CN OT = | ih | ⊗ I + | ih | ⊗ X. The Toffoli gate, a three-qubit gate, defined byToffoli | a i| b i| c i = ( | a i| b i| ⊕ c i , if a = b = 1 ; | a i| b i| c i , otherwise , for a, b, c ∈ { , } . Toffoli gate can be efficiently decomposed into CN OT, H, S, and T gates [NC16] andhence it is considered as an elementary quantum gate in this paper. It is easy to see that a NAND gatecan be implemented by a Toffoli gate: Toffoli | a i| b i| i = | a i| b i| NAND ( a, b ) i , where NAND ( a, b ) = 0 , if ( a, b ) = (1 , , and NAND ( a, b ) = 1 , otherwise. In particular, Toffoli gate together with ancilla preparationare universal for classical computation, that is for any classical function, we can implement it as (controlled)quantum one and the total quantum elementary gates will be the same order of the classical ones. Definition 20 (Search problem).
Suppose we have a set of objects named { , , . . . , N } , of which some are targets.Suppose O is an oracle that identifies the targets. The goal of a search problem is to find a target i ∈ { , , . . . , N } bymaking queries to the oracle O . In search problem, one will try to minimize the number of queries to the oracle. In classical, one need O ( N ) queries to solve such problem. Grover, on the other hand, provided a quantum algorithm, that solvesthe search problem with only O ( √ N ) queries [Gro96]. When the number of targets is unknown, Brassard et al. provided a modified Grover algorithm that solves the search problem with O ( √ N ) queries [BBHT98],which is of the same order as the query complexity of the Grover search. Moreover, D ¨urr and Høyer showedgiven an unsorted table of N values, there exists a quantum algorithm that finds the index of minimum withonly O ( √ N ) queries [DH96], with constant probability of error. Theorem 21 ([DH96], Theorem 1).
Let T [1 , , ..., N ] be an unsorted table of N items, each holding a value from anordered set. Suppose that we have a quantum oracle O T such that O T | i i| i = | i i| T [ i ] i . Then there exists a quantumalgorithm that finds the index y such that T [ y ] is the minimum with probability at least / O ( √ N ) queries to O T . .3 Probability We need the following lemma on distribution of vector inner product which directly follows from the Left-over Hash Lemma [ILL89].
Lemma 22.
Let G be a finite abelian group, and let f be a positive integer. Let Y ⊆ { , } f . Define the inner product h· , ·i : G f × Y → G by h x, y i = P i x i y i for all x ∈ G f , y ∈ Y . Let X, Y be independent and uniformly randomvariables on G f , Y , respectively. Then d SD (( h X, Y i , X ) , ( U G , X )) ≤ · s | G || Y | , where U G is uniform in G and independent of X . We will also need the Chernoff-Hoeffding bound [Hoe63].
Lemma 23.
Let X , . . . , X M be the independent and identically distributed random boolean variables of expectation p . Then for ε > , Pr " M M X i =1 X i ≤ p (1 − δ ) ≤ (cid:18) e − δ (1 − δ ) − δ (cid:19) pM . In this section, we present a new algorithm for Discrete Gaussian sampling above the smoothing parameter.
We now present the main result of this section.
Theorem 24.
Let n ∈ N , q ≥ , d ∈ [1 , n ] be positive integers, and let ε > . Let C be any positive integer. Let L bea lattice of rank n , and let s ≥ √ dη ε ( q L ) = 2 √ dqη ε ( L ) . There is an algorithm that, given N = 160 d · C · q n/d independent samples from D L ,s , outputs a list of vectors that is (4 ε d N + 11 Cq − n/ ) -close to Cq n/d independentvectors from D L , √ d +1 q s . The algorithm runs in time C · (10 e · d ) d · q n + n/d + o ( n ) and requires memory poly ( d ) · q n/d excluding the input and output memory.Proof. We prove the result for C = 1 , and the general result follows by repeating the algorithm. Let { x , . . . , x N } be the N input vectors and let { c , . . . , c N } be the corresponding cosets in L /q L . The al-gorithm does the following:1. Initialize two lists L = { x , . . . , x N } and L = { x N +1 , . . . , x N } each with N input vectors, and let Q = 0 .2. Let v be the first vector in L .3. Find d vectors (by trying all d -tuples) x i , . . . , x i d from L such that c i + · · · + c i d − v ∈ q L . If nosuch vectors exist go to step(6).4. Output the vector x i + ··· + x i d − v q ∈ L , and let Q = Q + 1 . If Q = q n/d , then END .5. Remove vectors x i , · · · , x i d from L
6. Remove vector ~v from L and repeat Steps (2) to (5).The time complexity of the algorithm is N · (cid:18) N/ d (cid:19) ≤ N (cid:18) eN d (cid:19) d ≤ (10 e · d ) d · q n + n/d + o ( n ) , ε ′ = ε d so that s ≥ √ η ε ′ ( q L ) by Lemma 11. Without loss of generality, we can assume that the vectors x i for i ∈ [ N ] are sampled by first sampling c i ∈ L /q L such that Pr[ c i = c ] = Pr[ D L ,s ∈ q L + c ] and then samplingthe vector x i according to D q L + c i ,s . Moreover, by Corollary 10, this distribution is ε ′ N -close to sampling c i for i ∈ [ N ] , independently and uniformly from L /q L , and then sampling the vectors x i according to D q L + c i ,s . We now assume that the input is sampled from this distribution.Without loss of generality, we can assume that the algorithm initially gets only the corresponding cosetsas input, and the vectors x i j ∈ q L + c i j for j ∈ [8 d ] , and v ∈ q L + c are sampled from D q L + c ij ,s and D q L + c ,s only before such a tuple is needed in Step 4 of the algorithm. Since any input vector is used onlyonce in Step 4, these samples are independent of all prior steps. This implies, by Theorem 12, that the vectorobtained in Step 4 of the algorithm is ε ′ -close to being distributed as D L ,s √ d +1 q .It remains to show that our algorithm finds q n/d vectors (with high probability). Let N ′ = N be aninteger, X be a random variable uniform over ( L /q L ) N ′ , and let Y be a random variable independent of X and uniform over vectors in { , } N ′ with Hamming weight d . The number of such vectors is (cid:18) N ′ d (cid:19) ≥ (cid:18) N ′ d (cid:19) d ≥ q n . (3)Let U be a uniformly random coset of L /q L . By Lemma 22 and (3), we have d SD (( h X, Y i , X ) , ( U, X )) ≤ · r q n q n < q − n/ , for a large enough value of n . By Markov inequality, with probability greater than − (10 · q − n/ ) over thechoice of x ← X , we have that the statistical distance between h x, Y i and U is less than q − n , which impliesfor any v ∈ L /q L , q − n + q − n > Pr[ h x, Y i = v mod q L ] > q − n − q − n . (4)We assume that the input vectors in list L satisfy (4), introducing a statistical distance of at most · q − n/ .Notice that after the algorithm found i vectors for any i < q n/d , it has removed id vectors from L . We willshow that for each vector from L (which is uniformly sampled from L /q L ) with constant probability wewill find d -vectors in Step (3).After i < q n/d output vectors have been found, there are M = N ′ − id vectors remaining in the list L .There are (cid:0) M d (cid:1) different d -combinations possible with vectors remaining in L . (cid:18) N ′ d (cid:19) / (cid:18) M d (cid:19) = N ′ · · · ( N ′ − d + 1) M · · · ( M − d + 1) < (cid:18) N ′ N ′ − d ( i + 1) (cid:19) d (cid:18) dq n/d N ′ − dq n/d (cid:19) d = (cid:18) d − (cid:19) d < since N ′ = 80 d q n/d for C = 1 (5)At the beginning of the algorithm, there are (cid:0) N ′ d (cid:1) combinations, and hence by (4), each of the q n cosetsappears at least . q − n (cid:0) N ′ d (cid:1) times. After i < q n/d output vectors have been found, there are only (cid:0) M d (cid:1) com-binations left, and (cid:0) N ′ d (cid:1) − (cid:0) M d (cid:1) possible combinations have been removed. We say that a coset c disappears ifthere is no set of d vectors in L that add to c . In order for a coset to disappear, all of the at least . q − n (cid:0) N ′ d (cid:1) combinations from the initial list must be removed. Hence, the number of cosets that disappear is at most ( N ′ d ) − ( M d ) . 
q − n ( N ′ d ) < / . q n = q n distinct cosets by (5). Hence with probability at least / , we find d vectors x i , . . . , x i d from L such that x i + · · · + x i d − v ∈ q L . By Chernoff-Hoeffding bound with probabilitygreater than − e − d q n/d , the algorithm finds at least q n/d vectors. In total, the statistical distance from thedesired distribution is ε ′ N + 2 ε ′ q n/d + 10 · q − n/ + e − d q n/d ≤ ε ′ N + 11 · q − n/ . orollary 25. Let n ∈ N , q ∈ [4 , √ n ] be an integer, and let ε = q − n/q . Let L be a lattice of rank n , and let s ≥ η ε ( L ) . There is an algorithm that outputs a list of vectors that is q − Ω ( n ) -close to q n/q independent vectorsfrom D L ,s . The algorithm runs in time q n + o ( n ) and requires memory poly ( n ) · q n/q .Proof. Choose d so that d − < q d , which is possible when q > , and let α = q/ √ d + 1 — this isthe ratio by which we decrease the Gaussian width in Theorem 24 — and note that α ≥ . .Let p = ⌈ √ dq ⌉ < q and k be the smallest integer such that α k · p ≥ n log log n/ log n . Thus k = O ( n log log n/ log n ) . Let g = α k ps ≥ n log log n/ log n · η ε ( L ) . By Theorem 16, in time N · poly ( n ) , we get N = (160 d ) k q n/d samples from D L ,g .We now iterate k times the algorithm from Theorem 24. Initially we have N vectors. At the beginning ofthe i -th iteration for i ≤ k − , we have N i := N · (160 d ) − i vectors that are ∆ i -close to being independentlydistributed from D L ,α − i g , where α − i g > αp · η ε ( L ) . Hence, we can apply Theorem 24 and get N i +1 = N i / d vectors that are ∆ i +1 -close to being independently distributed from D L ,α − ( i +1) g , where ∆ i +1 ∆ i + 4 ε d N i + 11(160 d ) k − i q − n/ . At each iteration we had N i ≥ d q n/d vectors, a necessary conditionto apply Theorem 24. Therefore after k iterations, we have at least N k = N / (160 d ) k = q n/d samples thatare ∆ k -close to being independently distributed from D L ,α − k g , where ∆ k q − n/ k X i =1 (160 d ) k − i + k − X i =0 ε d N i ≤ d ) k q − n/ + 4 q − n q n/d k − X i =0 (160 d ) k − i since d > q ≤ (cid:16) q − n/ + 4 q − n + n/d (cid:17) (160 d ) k +1 = q − n/ o ( n ) since (160 d ) k +1 = q o ( n ) . Any vector distributed as D L ,ps is in p L with probability at least p − n . We repeat the algorithm p n = O ( q n ) times to obtain p n · · q n/d vectors that are p n q − n/ o ( n ) = q − n/ o ( n ) close to p n · q n/d independentsamples from D L ,ps . Of these samples obtained, we only keep vectors that fall in p L and divide them by p . Let M = p n · · q n/d . By Chernoff-Hoeffding (Lemma 23) with P = p − n , and δ = , the probability toobtain less than (1 − δ ) P M = q n/d samples is at most (cid:16) e − δ (1 − δ ) − δ (cid:17) P M e − q n/d . Furthermore, d q +1616 and q ln q q is decreasing for q > , hence for q √ n , q n/d > e n ln q q > e n ln √ n n > e
16 ln √ n − o (1) = Ω ( n ) . Hence with probability greater than − e − q n/d = 1 − q − Ω ( n ) , we get q n/d vectors from the distribution D L ,s . The statistical distance from the desired distribution is q − Ω ( n ) + q − n/ o ( n ) ≤ q − n/ o ( n ) . We repeatthis for q n/q q n/d times, to get q n/q vectors. The total statistical distance from the desired distribution is q n/q q n/d · q − n/ o ( n ) ≤ q − Ω ( n ) . The total running time is bounded by q n q n/q q n/d ! poly ( n ) · N + k − X i =0 (10 ed ) d · (160 d ) k − i q n + n/d + o ( n ) ! q n + o ( n ) . The memory usage is slightly more involved: we can think of the k iterations as a pipeline with k interme-diate lists and we observe that as soon as a list (at any level) has more than d q n/q elements, we canapply Theorem 24 to produce q n/q vectors at the next level. Hence, we can ensure that at any time, eachlevel contains at most d q n/q vectors, so in total we only need to store at most k · d q n/q = poly ( n ) q n/q vectors, to which we add the memory usage of the algorithm of Theorem 24 which isbounded by poly ( n ) · q n/d poly ( n ) · q n/q . Finally, we run the filter ( p L ) on the fly at the end of the k iterations to avoid storing useless samples. ⊓⊔ q ≥ , and the running time can be bounded by c n + o ( n )1 · q c n for someconstants c and c that we have not tried to optimize. BDD and
SVP
Theorem 26.
Let n ∈ N , q ∈ [4 , √ n ] be a positive integer. Let L be a lattice of rank n . There is a randomizedalgorithm that solves . /q - BDD in time q n + o ( n ) and requires memory poly ( n ) · q n/q .Proof. Let ε = q − nq and s = η ε ( L ) . From corollary 25, there exists an algorithm that outputs q n/q vectorswhose distribution is statistically close to D L ,s in q n + o ( n ) time and poly ( n ) · q n/q space.By Theorem 13, there is a reduction from α - BDD to - DGS mη ε with m = O ( n log(1 /ε ) √ ε ) = O ( n q q n/q ) ,where the decoding coefficient α = √ log(1 /ε ) /π − o (1)2 η ε ( L ∗ ) λ ( L ) . By repeating poly ( n ) times the algorithm from Corol-lary 25, we get m vectors from D L ,η ε . By Lemma 14, we get α ( L ) = p log(1 /ε ) /π − o (1)2 η ε ( L ∗ ) λ ( L ) ≥ s log(1 /ε )2 n ( β /e ) ε − /n · (1 − o (1)) ≥ q s · e · log q β q /q ≥ (10 q ) − . Hence, we can solve . /q - BDD in time q n + o ( n ) and in space poly ( n ) · q nq . ⊓⊔ Theorem 27.
Let n ∈ N , q ∈ [4 , √ n ] be a positive integer. Let L be a lattice of rank n . There is a randomizedalgorithm that solves SVP in time q n + o ( n ) and in space poly ( n ) · q nq .Proof. By Theorem 26, we can construct a . q - BDD oracle in time q n + o ( n ) and in space poly ( n ) · q nq . Eachexecution of the BBD oracle now takes m = O ( n q q n/q ) time. By Theorem 15, with (10 q ) n queries to . q - BDD oracle, we can find the shortest vector. The total time complexity is q n + o ( n ) + n q q n/q · (10 q ) n = q n + o ( n ) . ⊓⊔ Remark 28.
If we take q = √ n , Theorem 27 gives a SVP algorithm that takes n O ( n ) time and poly ( n ) space.When q is a large enough constant, for any constant ε > , there exists a constant C = C ( ε ) > , such thatthere is a Cn time and εn space algorithm for DGS , and
SVP . In particular, the time complexity of thealgorithm in this regime is worse than the best sieving algorithms.
SVP
In this section, we present relatively space-efficient classical and quantum algorithms to find a shortestnonzero lattice vector. Our quantum algorithm is the first provable algorithm for exact-
SVP that takes lessthan O (2 n ) time. Recall that there exists an algorithm [CCL18] that, given a lattice L and a target vector t ,outputs all lattice vectors within distance pαλ ( L ) to t , by making p n calls to an α - BDD oracle. We present aquantum algorithm for
SVP that takes . n + o ( n ) time and . n + o ( n ) space with poly ( n ) qubits. We alsopresent a classical algorithm for SVP that takes . n + o ( n ) time and . n + o ( n ) space.The strategy followed by [CCL18] is to choose p = ⌈ /α ⌉ , the target vector t to be the origin, andsequentially compute the candidate vectors for SVP . There are two ways to reduce the time complexity:one can improve the
BDD oracle or reduce the number of queries. We will show how to improve bothaspects. 14 .1 Quantum algorithm for
SVP
In order to solve
SVP by the method in [CCL18], it is sufficient to use a
BDD oracle with decoding coefficient α slightly greater than / . In [CCL18], the authors use a reduction from BDD to DGS by [DRS14] and usethe Gaussian sampler of [ADRS15] to obtain many samples with standard deviation equal to √ η / . Thisallows them to construct a . − BDD but each call to the BDD oracle uses many DGS samples. This iswasteful since we really only need a / - BDD . The reason why it is so expensive is that in the analysis theyneed to find ε such that η ε > √ η / to apply the reduction, and it requires them to take ε much smallerthan would be strictly necessary to construct a / - BDD oracle; this smaller ε explains the bigger decodingradius.We obtain a BDD oracle with decoding distance . by using the same reduction but making eachcall cheaper. This is achieved by building a sampler that directly samples at the smoothing parameter,hence avoiding the √ factor, allowing us to take a bigger ε . In [ADRS15], it was shown how to constructa dense lattice L ′ whose smoothing parameter η ( L ′ ) is √ times smaller than the original lattice, and thatcontains all lattice points of the original lattice. Suppose that we first use such a dense lattice to constructa corresponding discrete Gaussian sampler with standard deviation equal to s = √ η ( L ′ ) . We then dothe rejection sampling on condition that the output is in the original lattice L . We thus have constructeda discrete Gaussian sampler of L whose standard deviation is √ η ( L ′ ) = η ( L ) . Nevertheless, | L ′ / L | willbe at least . n , which implies that this procedure needs at least . n input vectors to produce an outputvector. We use this idea to obtain the following lemma. Lemma 29.
There is an probabilistic algorithm that, given a lattice
L ⊂ R n , m ∈ Z + and s ≥ η / ( L ) as input,outputs m samples from a distribution ( m · − Ω ( n ) ) -close to D L ,s in expected time m · ( n/ o ( n ) and ( m + 2 n/ ) · o ( n ) space.Proof. Let a = n + 4 . We repeat the following until we output m vectors. We use the algorithm in Lemma18 to obtain a lattice L ′ ⊃ L of index a . We then run the algorithm from Theorem with input ( L ′ , s ) toobtain a list of vectors from L ′ . We output the vectors in this list that belong to L .By Theorem , we obtain, in time and space ( n/ o ( n ) , M = 2 n/ vectors that are − Ω ( n ) -close to M vectors independently sampled from D L ′ ,s . Also, by Lemma 18, with probability at least / , we have s ≥ η / ( L ) ≥ √ η / ( L ′ ) .From these M vectors, we will reject the vectors which are not in lattice L . It is easy to see that theprobability that a vector sampled from the distribution D L ′ ,s is in L is at least ρ s ( L ) /ρ s ( L ′ ) ≥ a usingLemma 9. Thus, the probability that we obtain at least one vector from L (which is distributed as D L ,s ) is atleast (cid:16) − (1 − / a ) n/ (cid:17) = 12 (cid:16) − (1 − / n/ ) n/ (cid:17) ≥ · (cid:16) − e − n/ / n/ (cid:17) = 12 (1 − e − / ) . It implies that after rejection of vectors, with constant probability we will get at least one vector from D L ,s . Thus, the expected number of times we need to repeat the algorithm is O ( m ) until we obtain vec-tors y , . . . , y m whose distribution is statistically close to being independently distributed from D L ,s . Thetime and space complexity is clear from the algorithm. ⊓⊔ Theorem 30.
For any sufficiently large integer n , any integer m > , and a lattice L ⊂ R n , there exists an algorithmthat creates a . - BDD oracle in . n + o ( n ) time and . n + o ( n ) space. Every call to this oracle takes . n + o ( n ) time and space.Proof. Let ε = 2 − . n , we know that η ε ( L ∗ ) > η / ( L ∗ ) for any n ≥ by the monotonicity of the smoothingparameter function. By using Lemma 29, we can sample O ( n log(1 /ε ) √ ε ) = 2 . n + o ( n ) vectors from D L ∗ ,η ε ( L ∗ ) in . n + o ( n ) time and . n + o ( n ) space. Given . n + o ( n ) samples from D L ∗ ,η ε ( L ∗ ) , by Theorem 13 and15emma 14 with β = 2 . (a known upper bound of the kissing number from [KL78]), we can call a α ( L ) - BDD oracle in time . n + o ( n ) with its decoding distance φ ( L ) = α ( L ) λ ( L ) given by φ ( L ) = p ln(1 /ε ) /π − o (1)2 η ε ( L ∗ ) > λ ( L )2 · s ln(1 /ε ) − π · o (1)ln(1 /ε ) + n ln β + o ( n ) ≥ . λ ( L ) . Therefore in . n + o ( n ) time and . n + o ( n ) space, we can sample . n + o ( n ) vectors from D L ∗ ,η ε ( L ∗ ) andrepeatedly use these samples to solve . - BDD . ⊓⊔ From [CCL18], we can enumerate all vectors of length p · . λ ( L ) by making p n calls to . - BDD oracle. Although naively searching for the minimum in the set of vectors of length less than or equal to p · . λ ( L ) , will find the origin with high probability, one can work around this issue by shifting thezero vector. Choosing an arbitrary nonzero lattice vector as the shift, we are guaranteed to obtain a vector oflength at least λ for p ≥ . Hence by combining the . - BDD oracle from Theorem 30 and the quantumminimum finding algorithm from Theorem 21, we can find the shortest vector. Note that, we can directlyuse the quantum speedup construction from [CCL18]. The following theorem is a simplified constructionfor the quantum algorithm.
Theorem 31.
For any n ≥ , there is a quantum algorithm that solves SVP in time . n + o ( n ) and classical-space . n + o ( n ) with polynomial number of qubits.Proof. Let B be a basis of the lattice, BDD . be a . - BDD oracle and let f . : Z n → L be f . ( s ) = − · BDD . ( L , ( Bs ) /
3) + Bs . The algorithm goes like this, we first use Theorem 30 to construct a quantum oracle O BDD on the first tworegisters that satisfies O BDD | i i| i = | i i|k f . ( i ) ki for all i ∈ Z n . We then construct another quantum circuit U satisfying U ( | ω i| i ) = ( | ω i| k ω ki ω = | ω i| k Be k + 1 i ω = , and apply it on the second and third registers. Here e ∈ Z n is a vector whose first coordinate is one andrest are zero. After that, we apply the quantum minimum finding algorithm on the first and third registersand get an index i ′ . The output of the algorithm will be f . ( i ′ ) .By Theorem 30, in . n + o ( n ) -time and . n + o ( n ) space, we can generate . n + o ( n ) vectors to con-struct a . - BDD oracle. Thus O BDD can be built using . n + o ( n ) Toffoli gates and poly ( n ) qubits. Tosee that we only need poly ( n ) qubits, we only keep the vectors of size smaller than exp( n ) in the constuctionof O BDD , they thus can all be stored within poly ( n ) qubits. Since the vectors are sampled from a Gaussianwith width at most exp( n ) , the error induced by throwing away the tail of the distribution is negligible.Furthermore all functions acting on these vectors can be implemented with poly ( n ) qubits.We can also construct U efficiently. Hence, the algorithm needs O (2 . n + o ( n ) ) Toffoli gates and poly ( n ) qubits for three registers. As a result by applying Theorem 21, the quantum algorithm takes . n · . n + o ( n ) =2 . n + o ( n ) time and . n + o ( n ) classical space with a polynomial number of qubits.Lastly, we show that the quantum algorithm will output a shortest non-zero vector with constant prob-ability. Since k Be k + 1 > λ ( L ) , with at least / probability one will find the index i such that f . ( i ) is a shortest nonzero vector by Theorem 21. Therefore it suffices to show that there is an index i ∈ Z n suchthat k f . ( i ) k = λ ( L ) . By Theorem 15, the list { f . ( s ) | s ∈ Z n } contains all lattice points within radius · . λ ( L ) = 1 . λ ( L ) from , including the lattice vector with length λ ( L ) . Hence with at least / probability, the algorithm outputs a non-zero shortest lattice vector. ⊓⊔ .2 Solving SVP by spherical caps on the sphere
We now explain how to reduce the number of queries to the α - BDD oracle. Consider a uniformly ran-dom target vector t such that α (1 − n ) λ ( L ) ≤ k t k < αλ ( L ) , it satisfies the condition of Theorem 15, i.e . dist( L , t ) ≤ αλ ( L ) . We enumerate all lattice vectors within distance αλ ( L ) to t and keep only the short-est nonzero one. We show that for α = 0 . , we will get the shortest nonzero vector of the lattice withprobability at least − . n + o ( n ) . By repeating this O (2 . n + o ( n ) ) times, the algorithm will succeed withconstant probability. We rely on the following construction of a . - BDD oracle.
Theorem 32.
For any dimension n ≥ , any integer m > , and a lattice L ⊂ R n , there exists an algorithm thatconstructs a . - BDD oracle in . n + o ( n ) time and . n + o ( n ) space. Each call to the oracle takes . n + o ( n ) time and space.Proof. Let ε = 2 − . n , we know that η ε ( L ∗ ) > η / ( L ∗ ) for any n ≥ by the monotonicity of thesmoothing parameter function. By using Lemma 29, we can sample O ( n log(1 /ε ) √ ε ) = 2 . n + o ( n ) vectorsfrom D L ∗ ,η ε ( L ∗ ) in . n + o ( n ) time and . n + o ( n ) space. Given . n + o ( n ) samples from D L ∗ ,η ε ( L ∗ ) , byTheorem 13 and Lemma 14, we can construct a α ( L ) - BDD oracle in time . n + o ( n ) with its decodingdistance φ ( L ) = α ( L ) λ ( L ) given by φ ( L ) = p ln(1 /ε ) /π − o (1)2 η ε ( L ∗ ) > λ ( L )2 · s ln(1 /ε ) − π · o (1)ln(1 /ε ) + n ln β + o ( n ) ≥ . λ ( L ) . Therefore in . n + o ( n ) time and . n + o ( n ) space, we can sample . n + o ( n ) vectors from D L ∗ ,η ε ( L ∗ ) andrepeatedly use these vectors to solve . - BDD . ⊓⊔ Theorem 33.
Theorem 33. There is a randomized algorithm that solves SVP in time $2^{1.741n+o(n)}$ and in space $2^{0.5n+o(n)}$ with constant probability.

Fig. 1. Cover the sphere by spherical caps
Proof.
On input lattice $\mathcal{L}(B)$, use the LLL algorithm [LLL82] to get a number $d$ (the norm of the first vector of the reduced basis) that satisfies $\lambda_1(\mathcal{L}) \leq d \leq 2^{n/2}\lambda_1(\mathcal{L})$. For $i = 1, \ldots, n^2$, let $d_i = d/(1 + 1/n)^i$, and let $\alpha = 0.412$. There exists a $j$ such that $\lambda_1(\mathcal{L}) \leq d_j \leq (1 + 1/n)\lambda_1(\mathcal{L})$. We repeat the following procedure for all $i = 1, \ldots, n^2$: for $j = 1$ to $2^{0.316n+o(n)}$, pick a uniformly random vector $v_{ij}$ on the surface of the ball of radius $\alpha(1 - 1/n)d_i$. By Theorem 15, we can enumerate $2^n$ lattice points using the function $f_{ij} : \mathbb{Z}_2^n \to \mathcal{L}$ defined by $f_{ij}(x) = Bx - 2\cdot\mathrm{BDD}_\alpha(\mathcal{L}, (Bx - v_{ij})/2)$.
At each step we only store the shortest nonzero vector enumerated so far, and at the end we output the shortest among them. The running time of the algorithm is straightforward: we make $2^n$ queries to an $\alpha$-BDD oracle, each of which takes $2^{0.425n+o(n)}$ time and space by Theorem 32, and we repeat this $n^2 \cdot 2^{0.316n+o(n)}$ times. Therefore the algorithm takes $2^{1.741n+o(n)}$ time and $2^{0.5n+o(n)}$ space.

To prove the correctness of the algorithm, it suffices to show that there exists an $i \in [n^2]$ for which the algorithm finds the shortest vector with high probability. Recall that there exists an $i$ such that $\lambda_1(\mathcal{L}) \leq d_i \leq (1 + 1/n)\lambda_1(\mathcal{L})$; let that index be $k$. We will show that for a uniformly random vector $v$ of length $\alpha(1 - 1/n)d_k$, if we enumerate $2^n$ vectors by the function $f : \mathbb{Z}_2^n \to \mathcal{L}$, $f(x) = Bx - 2\cdot\mathrm{BDD}_\alpha(\mathcal{L}, (Bx - v)/2)$, then with probability $2^{-0.316n - o(n)}$ there exists $x \in \mathbb{Z}_2^n$ such that $f(x)$ is the shortest nonzero lattice vector.

We show that we can cover the sphere of radius $\lambda_1$ by $2^{0.316n+o(n)}$ balls of radius $2\alpha\lambda_1 = 0.824\lambda_1$ whose centers are at distance $\alpha(1 - 1/n)d_k \leq 0.412\lambda_1$ from the origin (see Figure 1). We have two concentric spheres of radius $\alpha(1 - 1/n)d_k$ and $\lambda_1$, and we let $P$ be a uniformly random point on the sphere of radius $\alpha(1 - 1/n)d_k$. A ball of radius $2\alpha\lambda_1$ centered at $P$ covers a spherical cap of angle $\phi$ of the sphere of radius $\lambda_1$. By the law of cosines, we can compute $\phi \approx \cos^{-1}\left(\frac{1 - 3\alpha^2}{2\alpha}\right)$ and hence, by Theorem 19, if we randomly choose $v$, the corresponding spherical cap covers the shortest vector with probability at least $\int_0^\phi \sin^{n-2}\theta \,\mathrm{d}\theta \,\big/ \int_0^\pi \sin^{n-2}\theta \,\mathrm{d}\theta \geq 2^{-0.316n - o(n)}$. Besides, by Theorem 15, the list $\{f(x) \mid x \in \mathbb{Z}_2^n\}$ contains all lattice points within radius $2\alpha\lambda_1(\mathcal{L})$ of $v$. Hence, the list contains a shortest vector with probability $2^{-0.316n - o(n)}$. By repeating this process $2^{0.316n + o(n)}$ times, we can find the shortest vector with constant probability. ⊓⊔
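The covering exponents can likewise be checked numerically; the following sketch is our own (with all $o(n)$ terms dropped) and recovers the cap angle, the repetition exponent, and the total running-time exponent:

import math
A, alpha = 0.85, 0.412
phi = math.acos((1 - 3 * alpha**2) / (2 * alpha))  # cap angle on the sphere of radius lambda_1
rep = -math.log2(math.sin(phi))                    # repetition exponent
print(round(rep, 3))                               # -> 0.316
print(round(1 + A / 2 + rep, 3))                   # -> 1.741 (enumeration + oracle calls + repetitions)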
The time complexity of both algorithms in Section 4 is highly affected by the lattice kissing number. Notice that, in Section 4, we used the upper bound $2^{0.402n}$ on the lattice kissing number, which is possibly very loose. In this section, we study how the time complexity of our algorithms varies with the kissing number of the lattice. For any lattice $\mathcal{L} \subset \mathbb{R}^n$ and $d > 0$, let $N(\mathcal{L}, d)$ denote the number of lattice vectors of length less than $d$. Let
$$\gamma(\mathcal{L}) = \inf_{r \geq 1}\{\gamma : N(\mathcal{L}, r\lambda_1(\mathcal{L})) \leq \gamma^{r}\}$$
be the kissing number of the lattice $\mathcal{L}$, and let $\gamma_n = \sup_{\mathcal{L} \subset \mathbb{R}^n} \gamma(\mathcal{L})$ be the maximal lattice kissing number in dimension $n$. Let $T(r, n)$ be the number of vectors on an $n$-dimensional sphere of radius $r$ such that each pair of vectors is at distance at least $1$, and let
$$\tau_n = \inf_{r \geq 1}\{\tau : T(r, n) \leq \tau^{r}\}$$
be the geometric kissing number. It is trivial to see that $\gamma_n \leq \tau_n$. The best known upper bound on the geometric kissing number and the lattice kissing number, from the breakthrough work of Kabatyanskii and Levenshtein [KL78], is $\gamma_n \leq \tau_n < 2^{0.402n}$. For the geometric kissing number, we also know the lower bound [Cha53,Sha59,Wyn65]
$$\tau_n \geq \left(\frac{4}{3}\right)^{n/2 - o(n)} \approx 2^{0.2075n}.$$
Vlăduţ [Vlă19] recently constructed a family of lattices with exponentially large kissing numbers; this is the first (and only known) construction of a family of lattices with kissing number $2^{\Omega(n)}$. Given this result (and the fact that it was hard to find such a family of lattices), one might conjecture that $\gamma_n \ll \tau_n$, and that $\gamma_n$ is $c^{n+o(n)}$ for some constant $c$ much smaller than $2^{0.402}$, or perhaps even much smaller than $\sqrt{4/3}$. Moreover, for most rank-$n$ lattices $\mathcal{L}$, it is not unreasonable to conjecture that $\gamma(\mathcal{L}) = 2^{o(n)}$. In view of the fact that $\gamma(\mathcal{L})$ can be anywhere between $2^{o(n)}$ and $2^{0.402n+o(n)}$, in the following subsections we study the dependence of the time complexity of our algorithms on $\gamma(\mathcal{L})$.

First we discuss the correlation between the kissing number $\gamma(\mathcal{L})$ and the time complexity of the spherical capping algorithm. Note that Lemma 14 provides two different upper bounds on $\eta_\varepsilon(\mathcal{L}^*)\lambda_1(\mathcal{L})$: inequality (1) is always true, while for small enough $\varepsilon$ the better inequality (2) holds. Suppose we know how to construct an $\alpha$-BDD oracle. By taking $p = 2$ in Theorem 15, we can enumerate all lattice points in a ball of radius $p\alpha\lambda_1 = 2\alpha\lambda_1$ (centered on any point at distance less than $\alpha\lambda_1$ from the lattice) using $2^n$ calls to an $\alpha$-BDD oracle. Since we are trying to cover the sphere of radius $\lambda_1$, we need to take $\alpha > 1/3$. By Theorem 13, we can construct such an oracle by reducing $\mathrm{CVP}^{\alpha\lambda_1}$ to $0.5\text{-}\mathrm{DGS}^m_{\eta_\varepsilon}$, where $m = O(n\log(1/\varepsilon)/\sqrt{\varepsilon})$, and each subsequent call to the oracle costs $m \cdot \mathrm{poly}(n)$. The constructed oracle has decoding distance $\alpha\lambda_1 = \frac{\sqrt{\ln(1/\varepsilon)/\pi} - o(1)}{2\eta_\varepsilon(\mathcal{L}^*)}$, which we do not know exactly but investigate below. We can solve $0.5\text{-}\mathrm{DGS}^m_{\eta_\varepsilon}$ by Lemma 29 in time $m + 2^{n/2+o(n)}$, which is negligible compared to the overall cost of the algorithm. Write $\varepsilon = 2^{-An}$, so that $m = 2^{An/2+o(n)}$. Let $\beta = \gamma(\mathcal{L})^{1/n}$ and $b = \log_2\beta = \frac{1}{n}\log_2\gamma(\mathcal{L})$. By using inequality (2) of Lemma 14, only valid when $\varepsilon \leq (e/\beta^2 + o(1))^{-n/2}$, we have that
$$\lambda_1(\mathcal{L})\,\eta_\varepsilon(\mathcal{L}^*) < \sqrt{\frac{\ln(1/\varepsilon) + n\ln\beta + o(n)}{\pi}}.$$
Hence we can guarantee that
$$\alpha = \frac{\sqrt{\ln(1/\varepsilon)/\pi} - o(1)}{2\,\eta_\varepsilon(\mathcal{L}^*)\lambda_1(\mathcal{L})} > \frac{\sqrt{\ln(1/\varepsilon)/\pi} - o(1)}{2\sqrt{\frac{\ln(1/\varepsilon) + n\ln\beta + o(n)}{\pi}}} = \frac{1}{2}\sqrt{\frac{\ln(1/\varepsilon) + o(1)}{\ln(1/\varepsilon) + n\ln\beta}} \approx \frac{1}{2}\sqrt{\frac{A}{A+b}}.$$
To guarantee that $\alpha > 1/3$, we always choose $A$ so that $\frac{1}{2}\sqrt{\frac{A}{A+b}} > \frac{1}{3}$, that is $A > \frac{4b}{5}$. Furthermore, as noted above, this inequality is only valid when $\varepsilon \leq (e/\beta^2 + o(1))^{-n/2}$, that is $A > \frac{1}{2\ln 2} - b$.
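In code, these constraints read as follows (a small helper of our own, with the $o(1)$ terms dropped):

import math

def alpha_ineq2(A, b):
    # decoding coefficient guaranteed by inequality (2)
    return 0.5 * math.sqrt(A / (A + b))

def A_min(b):
    # alpha > 1/3 forces A > 4b/5; validity of inequality (2) forces A > 1/(2 ln 2) - b
    return max(4 * b / 5, 1 / (2 * math.log(2)) - b)

print(round(A_min(0.402), 4))               # -> 0.3216
print(round(alpha_ineq2(0.85, 0.402), 3))   # -> 0.412, as in Theorem 32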
As explained in Section 4.2, each such ball of radius $2\alpha\lambda_1$ covers a spherical cap of angle $\phi$ (see Figure 1), and we need to cover the sphere of radius $\lambda_1$ (centered at the origin) by such spherical caps. In Section 4.2, we chose to put the centers of those spherical caps on a sphere of radius (approximately) $\alpha\lambda_1$, the reason being that this is the maximum distance that we can choose, since Theorem 15 requires the center to be at distance at most $\alpha\lambda_1$ from the lattice. However, when $\beta$ becomes smaller, $\alpha$ increases and it becomes more interesting to take the centers closer to the origin. We can calculate the optimal choice by noting that if we take the centers of the caps at distance $r$ (instead of $\alpha\lambda_1$), then the angle $\phi$ satisfies
$$\cos\phi = \frac{\lambda_1^2 + r^2 - 4\alpha^2\lambda_1^2}{2r\lambda_1}$$
by the law of cosines. We want to maximize the angle $\phi$, since the area we can cover increases with $\phi$. Minimizing $\cos(\phi)$, we find that the optimal radius is $r = \sqrt{1 - 4\alpha^2}\,\lambda_1$. Hence, in general, we choose to put the centers on a sphere of radius $r = \min(\alpha, \sqrt{1 - 4\alpha^2})\,\lambda_1$, since we cannot take the centers at distance more than $\alpha\lambda_1$. Now observe that the surface area of any such cap is lower bounded by the surface area of the base of the cap, which is an $(n-1)$-dimensional sphere of radius $\lambda_1\sin\phi$. Hence the number of spherical caps required to cover the surface of the sphere is of the order of $A_n(\lambda_1)/A_{n-1}(\lambda_1\sin\phi)$, where $A_n$ is the surface area of an $n$-dimensional sphere:
$$A_n(\lambda_1) = \frac{2\pi^{n/2}\lambda_1^{n-1}}{\Gamma(n/2)}, \qquad A_{n-1}(\lambda_1\sin\phi) = \frac{2\pi^{(n-1)/2}(\lambda_1\sin\phi)^{n-2}}{\Gamma((n-1)/2)}, \qquad \frac{A_n(\lambda_1)}{A_{n-1}(\lambda_1\sin\phi)} = \frac{2^{o(n)}}{\sin^{n-2}\phi}.$$
Therefore, an upper bound on the total time complexity of our method can be expressed as
$$T(A, b) = 2^{n} \cdot 2^{An/2} \cdot \frac{2^{o(n)}}{\sin^{n-2}\phi},$$
where $\beta^n = 2^{bn}$ is the kissing number of the lattice. Let the optimal time complexity $f(b) = 2^{c(b)n}$ be the minimum of $T(\cdot, b)$ over $A \geq A_{\min}(b)$, where $A_{\min}(b)$ is the smallest value of $A$ that we can take for a given $\beta = 2^b$, namely $A_{\min}(b) = \max\left(\frac{4b}{5}, \frac{1}{2\ln 2} - b\right)$.
Then $c(b)$ is given by
$$c(b) = \min_{A \geq A_{\min}(b)} \frac{\log_2 T(A, b)}{n}.$$
The blue curve in Figure 2, computed using numerical methods, describes the relation between the (log of the) time complexity and $\beta = 2^b$, where $b$ ranges from $0$ to $0.402$.

[Figure 2: horizontal axis $b$, vertical axis $c(b)$; curves: classical spherical capping with inequality (2), classical spherical capping with inequality (1), and the classical version of our quantum algorithm in Section 4.1.]
Fig. 2. (Log of the) time complexity of the classical version of the quantum algorithm in Section 4.1 and the spherical capping algorithm versus the kissing number $\beta = 2^b$.
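The blue curve can be reproduced with a straightforward grid search; the following is our own numerical sketch ($o(n)$ factors dropped), not the authors' code:

import math

def c_classical_capping_ineq2(b, steps=20000):
    # minimize  1 + A/2 - log2(sin phi)  over admissible A (inequality (2))
    A_lo = max(4 * b / 5, 1 / (2 * math.log(2)) - b)
    best = float("inf")
    for i in range(1, steps + 1):
        A = A_lo + 3.0 * i / steps
        alpha = 0.5 * math.sqrt(A / (A + b))
        # optimal cap-centre radius, clamped for the degenerate case alpha = 1/2
        r = max(min(alpha, math.sqrt(max(1 - 4 * alpha**2, 0.0))), 1e-9)
        cosphi = (1 + r * r - 4 * alpha * alpha) / (2 * r)
        if not -1.0 < cosphi < 1.0:
            continue
        best = min(best, 1 + A / 2 - math.log2(math.sin(math.acos(cosphi))))
    return best

print(round(c_classical_capping_ineq2(0.402), 3))  # -> ~1.741
print(round(c_classical_capping_ineq2(0.0), 3))    # -> ~1.361 (inequality (1) does better: ~1.292)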
Inequality (2) in Lemma 14 tells us that if we take an extremely small $\varepsilon$ when constructing the BDD oracle, we can obtain a BDD oracle with $\alpha(\mathcal{L})$ almost $1/2$; however, each call to the oracle will then be very costly. On the other hand, if we use inequality (1) in Lemma 14 with a larger $\varepsilon$, each call to the oracle takes much less time, but the constraint on the decoding coefficient $\alpha$ is different. More precisely, we can guarantee that
$$\alpha(\mathcal{L}) = \frac{\sqrt{\ln(1/\varepsilon)/\pi}}{2\,\eta_\varepsilon(\mathcal{L}^*)\lambda_1(\mathcal{L})} > \frac{1}{2}\sqrt{\frac{2e\ln(1/\varepsilon)}{n}} \cdot \beta^{-1} \cdot \varepsilon^{1/n} = \frac{1}{2\beta} \cdot 2^{-A}\sqrt{A} \cdot \sqrt{2e\ln 2}.$$
In that case, the only difference with the previous case is that we need to choose $A$ so that $\frac{1}{2\beta} \cdot 2^{-A}\sqrt{A} \cdot \sqrt{2e\ln 2} > 1/3$. The red curve in Figure 2, computed using numerical methods, describes the relation between the (log of the) time complexity and $\beta = 2^b$, where $b$ ranges from $0$ to $0.402$.

We would also like to compare our spherical capping algorithm to the classical version of our quantum algorithm in Section 4.1. Note that this classical algorithm is basically the same as in [CCL18], but with a $1/3$-BDD oracle, each call of which is less time-consuming than the BDD oracle used in [CCL18]. The time complexity of this classical algorithm is $3^n \cdot 2^{An/2}$, where $\varepsilon = 2^{-An}$ is chosen so that we can construct a $1/3$-BDD oracle, using either inequality (1) or (2) in Lemma 14. If we use inequality (2), then $\alpha(\mathcal{L}) > \frac{1}{2}\sqrt{\frac{A}{A+b}}$. Since our algorithm requires that $\alpha \geq 1/3$, we choose $\varepsilon$ such that $\frac{1}{2}\sqrt{\frac{A}{A+b}} = \frac{1}{3}$, which means $A = \frac{4b}{5}$. But inequality (2) is only valid when $\varepsilon \leq (e/\beta^2 + o(1))^{-n/2}$, that is $A > \frac{1}{2\ln 2} - b$.
We thus get $b \geq 0.4007$, which makes inequality (2) unhelpful for smaller $b$. Hence we use the general bound in Lemma 14 here. By applying inequality (1), our decoding coefficient $\alpha$ satisfies
$$\alpha(\mathcal{L}) > \frac{1}{2\beta} \cdot 2^{-A}\sqrt{A} \cdot \sqrt{2e\ln 2}.$$
Since our algorithm requires that $\alpha \geq 1/3$, we choose $\varepsilon$ such that
$$\frac{1}{2\beta} \cdot 2^{-A}\sqrt{A} \cdot \sqrt{2e\ln 2} = \frac{1}{3}.$$
Let the time complexity of this classical algorithm be $2^{cn}$, and let the kissing number be $\beta^n = 2^{bn}$, where $b \in [0, 0.402]$. We know that $c = \log_2(3) + A/2$, hence the relation between $c$ and $b$ is
$$\sqrt{c - \log_2(3)} \cdot 2^{-2c + 2\log_2(3)} = \frac{1}{3}\sqrt{\frac{1}{e\ln 2}} \cdot 2^{b}.$$
This relation is shown as the green curve in Figure 2.

Notice that we can always achieve a time complexity that is the minimum of the three curves. We specify two special cases of the kissing number:
– If $\gamma(\mathcal{L}) = 2^{0.402n}$: we have a $2^{1.741n}$-time classical algorithm using the spherical capping algorithm with inequality (2).
– If $\gamma(\mathcal{L}) = 2^{o(n)}$: we have a $2^{1.292n}$-time classical algorithm using the spherical capping algorithm with inequality (1).
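For reference, the green-curve relation can be inverted numerically; this is our own small helper (valid for $c > \log_2 3$), solving for $b$ given $c$:

import math

def b_of_c(c):
    # invert the displayed relation: 2^b = 3 * sqrt(c - log2 3) * 2^(-2c + 2 log2 3) * sqrt(e ln 2)
    lhs = math.sqrt(c - math.log2(3)) * 2 ** (-2 * c + 2 * math.log2(3))
    return math.log2(3 * lhs * math.sqrt(math.e * math.log(2)))

print(round(b_of_c(1.7458), 3))  # -> ~0.402, matching A = 4b/5 at the endpoint of the curve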
For the quantum algorithm. The analysis of our quantum algorithm in Section 4.1 is exactly the same as that of its classical version analysed above, except that the time complexity is now $3^{n/2} \cdot 2^{An/2}$. We present this time complexity as the green curve in Figure 3. We can also apply Grover's algorithm [Gro96] to speed up our spherical capping algorithm. The total complexity can be expressed as
$$T(A, b) = 2^{n/2} \cdot 2^{An/2} \cdot \frac{2^{o(n)}}{\sin^{(n-2)/2}\phi}.$$
Both $1/2$ factors in the exponents of the first and third terms come from Grover's algorithm. We also analyse how this complexity varies with $b$ according to both inequalities in Lemma 14; the results are shown as the blue and red curves in Figure 3. Although for $b = 0.402$ our quantum algorithm in Section 4.1 outperforms the quantum spherical capping algorithm, for smaller $b$ it is the other way around.

[Figure 3: horizontal axis $b$, vertical axis $c(b)$; curves: quantum spherical capping with inequality (2), quantum spherical capping with inequality (1), and our quantum algorithm in Section 4.1.]
Fig. 3. (Log of the) time complexity of the quantum algorithm in Section 4.1 and the quantum spherical capping algorithm versus the kissing number $\beta = 2^b$.

We specify two special cases of the kissing number:
– If $\gamma(\mathcal{L}) = 2^{0.402n}$: we have a $2^{0.9535n}$-time quantum algorithm using the method in Section 4.1.
– If $\gamma(\mathcal{L}) = 2^{o(n)}$: we have a $2^{0.750n}$-time quantum algorithm using the quantum spherical capping algorithm with inequality (1).
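As in the classical case, the quantum exponents can be reproduced numerically; the sketch below is our own ($o(n)$ terms dropped), not the authors' code:

import math

def c_quantum_sec41(b):
    # Section 4.1 method with inequality (2): 3^(n/2) * 2^(An/2), A = 4b/5
    return 0.5 * math.log2(3) + (4 * b / 5) / 2

def c_quantum_capping_ineq1(b, steps=20000):
    # Grover-accelerated capping: c = 1/2 + A/2 - (1/2) log2(sin phi), alpha from inequality (1)
    best = float("inf")
    for i in range(1, steps + 1):
        A = 2.0 * i / steps
        alpha = 2 ** (-A - b) * math.sqrt(2 * math.e * math.log(2) * A) / 2
        if not 1 / 3 < alpha <= 0.5:
            continue
        r = max(min(alpha, math.sqrt(max(1 - 4 * alpha**2, 0.0))), 1e-9)
        cosphi = (1 + r * r - 4 * alpha * alpha) / (2 * r)
        if not -1.0 < cosphi < 1.0:
            continue
        best = min(best, 0.5 + A / 2 - 0.5 * math.log2(math.sin(math.acos(cosphi))))
    return best

print(round(c_quantum_sec41(0.402), 4))        # -> ~0.9533, cf. the 2^{0.9535n+o(n)} bound
print(round(c_quantum_capping_ineq1(0.0), 3))  # -> ~0.750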
Acknowledgments

We would like to thank Pierre-Alain Fouque, Paul Kirchner, Amaury Pouly and Noah Stephens-Davidowitz for useful comments and suggestions.
References
ADH+19. Martin R. Albrecht, Léo Ducas, Gottfried Herold, Elena Kirshanova, Eamonn W. Postlethwaite, and Marc Stevens. The general sieve kernel and new records in lattice reduction. In Annual International Conference on the Theory and Applications of Cryptographic Techniques, pages 717–746. Springer, 2019.
ADRS15. Divesh Aggarwal, Daniel Dadush, Oded Regev, and Noah Stephens-Davidowitz. Solving the shortest vector problem in 2^n time using discrete Gaussian sampling: Extended abstract. In Proceedings of the Forty-Seventh Annual ACM Symposium on Theory of Computing, STOC 2015, Portland, OR, USA, June 14–17, 2015, pages 733–742, 2015.
Ajt96. Miklós Ajtai. Generating hard instances of lattice problems (extended abstract). In Proceedings of the Twenty-Eighth Annual ACM Symposium on the Theory of Computing, Philadelphia, Pennsylvania, USA, May 22–24, 1996, pages 99–108, 1996.
AKS01. Miklós Ajtai, Ravi Kumar, and D. Sivakumar. A sieve algorithm for the shortest lattice vector problem. In Proceedings of the Thirty-Third Annual ACM Symposium on Theory of Computing, STOC '01, pages 601–610, New York, NY, USA, 2001. ACM.
ALNS19. Divesh Aggarwal, Jianwei Li, Phong Q. Nguyen, and Noah Stephens-Davidowitz. Slide reduction, revisited - filling the gaps in SVP approximation. CoRR, abs/1908.03724, 2019.
ANS18. Yoshinori Aono, Phong Q. Nguyen, and Yixin Shen. Quantum lattice enumeration and tweaking discrete pruning. In Thomas Peyrin and Steven Galbraith, editors, Advances in Cryptology - ASIACRYPT 2018, pages 405–434, Cham, 2018. Springer International Publishing.
AS18a. Divesh Aggarwal and Noah Stephens-Davidowitz. (Gap/S)ETH hardness of SVP. In Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing, pages 228–238, 2018.
AS18b. Divesh Aggarwal and Noah Stephens-Davidowitz. Just take the average! An embarrassingly simple 2^n-time algorithm for SVP (and CVP). In 1st Symposium on Simplicity in Algorithms, SOSA 2018, pages 12:1–12:19, 2018.
BBHT98. Michel Boyer, Gilles Brassard, Peter Høyer, and Alain Tapp. Tight bounds on quantum searching. Fortschritte der Physik: Progress of Physics, 46(4-5):493–505, 1998.
BDGL16. Anja Becker, Léo Ducas, Nicolas Gama, and Thijs Laarhoven. New directions in nearest neighbor searching with applications to lattice sieving. In Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2016, Arlington, VA, USA, January 10–12, 2016, pages 10–24, 2016.
BLP+13. Zvika Brakerski, Adeline Langlois, Chris Peikert, Oded Regev, and Damien Stehlé. Classical hardness of learning with errors. In Proceedings of the Forty-Fifth Annual ACM Symposium on Theory of Computing, pages 575–584. ACM, 2013.
BLS16. Shi Bai, Thijs Laarhoven, and Damien Stehlé. Tuple lattice sieving. IACR Cryptology ePrint Archive, 2016:713, 2016.
Bri84. Ernest F. Brickell. Breaking iterated knapsacks. In Advances in Cryptology, Proceedings of CRYPTO '84, Santa Barbara, California, USA, August 19–22, 1984, Proceedings, pages 342–358, 1984.
BV14. Zvika Brakerski and Vinod Vaikuntanathan. Lattice-based FHE as secure as PKE. In Innovations in Theoretical Computer Science, ITCS '14, Princeton, NJ, USA, January 12–14, 2014, pages 1–12, 2014.
CCL18. Yanlin Chen, Kai-Min Chung, and Ching-Yi Lai. Space-efficient classical and quantum algorithms for the shortest vector problem. Quantum Information & Computation, 18(3&4):285–306, 2018.
CFJ13. Tony Cai, Jianqing Fan, and Tiefeng Jiang. Distributions of angles in random packing on spheres. The Journal of Machine Learning Research, 14(1):1837–1864, 2013.
Cha53. C. Chabauty. Résultats sur l'empilement de calottes égales sur une périsphère de R^n et correction à un travail antérieur. Comptes Rendus Hebdomadaires des Séances de l'Académie des Sciences, 236(15):1462–1464, 1953.
dB89. Rudi de Buda. Some optimal codes have structure. IEEE Journal on Selected Areas in Communications, 7(6):893–899, 1989.
DH96. Christoph Dürr and Peter Høyer. A quantum algorithm for finding the minimum. CoRR, quant-ph/9607014, 1996.
DRS14. Daniel Dadush, Oded Regev, and Noah Stephens-Davidowitz. On the closest vector problem with a distance guarantee. In IEEE 29th Conference on Computational Complexity, CCC 2014, Vancouver, BC, Canada, June 11–13, 2014, pages 98–109, 2014.
Duc18. Léo Ducas. Shortest vector from lattice sieving: A few dimensions for free. In Jesper Buus Nielsen and Vincent Rijmen, editors, Advances in Cryptology - EUROCRYPT 2018 - 37th Annual International Conference on the Theory and Applications of Cryptographic Techniques, Tel Aviv, Israel, April 29 - May 3, 2018, Proceedings, Part I, volume 10820 of Lecture Notes in Computer Science, pages 125–145. Springer, 2018.
FT87. András Frank and Éva Tardos. An application of simultaneous diophantine approximation in combinatorial optimization. Combinatorica, 7(1):49–65, 1987.
Gen09. Craig Gentry. Fully homomorphic encryption using ideal lattices. In Proceedings of the 41st Annual ACM Symposium on Theory of Computing, STOC 2009, Bethesda, MD, USA, May 31 - June 2, 2009, pages 169–178, 2009.
GN08. Nicolas Gama and Phong Q. Nguyen. Finding short lattice vectors within Mordell's inequality. In Proceedings of the 40th Annual ACM Symposium on Theory of Computing, Victoria, British Columbia, Canada, May 17–20, 2008, pages 207–216, 2008.
GNR10. Nicolas Gama, Phong Q. Nguyen, and Oded Regev. Lattice enumeration using extreme pruning. In Henri Gilbert, editor, Advances in Cryptology - EUROCRYPT 2010, pages 257–278, Berlin, Heidelberg, 2010. Springer Berlin Heidelberg.
GPV08. Craig Gentry, Chris Peikert, and Vinod Vaikuntanathan. Trapdoors for hard lattices and new cryptographic constructions. In Proceedings of the Fortieth Annual ACM Symposium on Theory of Computing, pages 197–206. ACM, 2008.
Gro96. Lov K. Grover. A fast quantum mechanical algorithm for database search. In Proceedings of the Twenty-Eighth Annual ACM Symposium on the Theory of Computing, Philadelphia, Pennsylvania, USA, May 22–24, 1996, pages 212–219, 1996.
Hel85. Bettina Helfrich. Algorithms to construct Minkowski reduced and Hermite reduced lattice bases. Theor. Comput. Sci., 41(2-3):125–139, December 1985.
HK17. Gottfried Herold and Elena Kirshanova. Improved algorithms for the approximate k-list problem in Euclidean norm. In Serge Fehr, editor, Public-Key Cryptography - PKC 2017, pages 16–40, Berlin, Heidelberg, 2017. Springer Berlin Heidelberg.
Hoe63. Wassily Hoeffding. Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association, 58(301):13–30, 1963.
HPS11. Guillaume Hanrot, Xavier Pujol, and Damien Stehlé. Analyzing blockwise lattice algorithms using dynamical systems. In Phillip Rogaway, editor, Advances in Cryptology - CRYPTO 2011, pages 447–464, Berlin, Heidelberg, 2011. Springer Berlin Heidelberg.
HR07. Ishay Haviv and Oded Regev. Tensor-based hardness of the shortest vector problem to within almost polynomial factors. In Proceedings of the Thirty-Ninth Annual ACM Symposium on Theory of Computing, pages 469–477, 2007.
HS07. Guillaume Hanrot and Damien Stehlé. Improved analysis of Kannan's shortest lattice vector algorithm. In Advances in Cryptology - CRYPTO 2007, 27th Annual International Cryptology Conference, Santa Barbara, CA, USA, August 19–23, 2007, Proceedings, pages 170–186, 2007.
ILL89. Russell Impagliazzo, Leonid A. Levin, and Michael Luby. Pseudo-random generation from one-way functions. In Proceedings of the Twenty-First Annual ACM Symposium on Theory of Computing, pages 12–24, 1989.
Jr.83. Hendrik W. Lenstra Jr. Integer programming with a fixed number of variables. Math. Oper. Res., 8(4):538–548, 1983.
Kan87. Ravi Kannan. Minkowski's convex body theorem and integer programming. Math. Oper. Res., 12(3):415–440, 1987.
KF16. Paul Kirchner and Pierre-Alain Fouque. Time-memory trade-off for lattice enumeration in a ball. Cryptology ePrint Archive, Report 2016/222, 2016. https://eprint.iacr.org/2016/222.
Kho05. Subhash Khot. Hardness of approximating the shortest vector problem in lattices. J. ACM, 52(5):789–808, 2005.
KL78. Grigorii Anatol'evich Kabatiansky and Vladimir Iosifovich Levenshtein. On bounds for packings on a sphere and in space. Problemy Peredachi Informatsii, 14(1):3–25, 1978.
Kle00. Philip Klein. Finding the closest lattice vector when it's unusually close. In Proceedings of the Eleventh Annual ACM-SIAM Symposium on Discrete Algorithms, SODA '00, pages 937–941, USA, 2000. Society for Industrial and Applied Mathematics.
KMPM19. Elena Kirshanova, Erik Mårtensson, Eamonn W. Postlethwaite, and Subhayan Roy Moulik. Quantum algorithms for the approximate k-list problem and their application to lattice sieving. In International Conference on the Theory and Application of Cryptology and Information Security, pages 521–551. Springer, 2019.
LLL82. A. K. Lenstra, H. W. Lenstra, and László Lovász. Factoring polynomials with rational coefficients. Math. Ann., 261:515–534, 1982.
LMV15. Thijs Laarhoven, Michele Mosca, and Joop van de Pol. Finding shortest lattice vectors faster using quantum search. Designs, Codes and Cryptography, 77, 2015.
LO85. J. C. Lagarias and Andrew M. Odlyzko. Solving low-density subset sum problems. J. ACM, 32(1):229–246, 1985.
Mic98. Daniele Micciancio. The shortest vector in a lattice is hard to approximate to within some constant. In Proceedings of the 39th Annual Symposium on Foundations of Computer Science, FOCS '98, page 92, USA, 1998. IEEE Computer Society.
MP13. Daniele Micciancio and Chris Peikert. Hardness of SIS and LWE with small parameters. In Advances in Cryptology - CRYPTO 2013 - 33rd Annual Cryptology Conference, Santa Barbara, CA, USA, August 18–22, 2013, Proceedings, Part I, pages 21–39, 2013.
MR04. Daniele Micciancio and Oded Regev. Worst-case to average-case reductions based on Gaussian measures. In 45th Annual IEEE Symposium on Foundations of Computer Science, FOCS 2004, pages 372–381, 2004.
MR08. Daniele Micciancio and Oded Regev. Lattice-based cryptography, 2008.
MV10. Daniele Micciancio and Panagiotis Voulgaris. Faster exponential time algorithms for the shortest vector problem. In Proceedings of the Twenty-First Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2010, Austin, Texas, USA, January 17–19, 2010, pages 1468–1480, 2010.
MV13. Daniele Micciancio and Panagiotis Voulgaris. A deterministic single exponential time algorithm for most lattice problems based on Voronoi cell computations. SIAM J. Comput., 42(3):1364–1391, 2013.
MW15. Daniele Micciancio and Michael Walter. Fast lattice point enumeration with minimal overhead. In Proceedings of the Twenty-Sixth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2015, San Diego, CA, USA, January 4–6, 2015, pages 276–294, 2015.
NC16. Michael A. Nielsen and Isaac L. Chuang. Quantum Computation and Quantum Information (10th Anniversary edition). Cambridge University Press, 2016.
NV08. Phong Q. Nguyen and Thomas Vidick. Sieve algorithms for the shortest vector problem are practical. J. Mathematical Cryptology, 2(2):181–207, 2008.
PS09. Xavier Pujol and Damien Stehlé. Solving the shortest lattice vector problem in time 2^{2.465n}. IACR Cryptology ePrint Archive, 2009:605, 2009.
Reg04. Oded Regev. Lattices in computer science, lecture 8, Fall 2004.
Reg06. Oded Regev. Lattice-based cryptography. In Advances in Cryptology - CRYPTO 2006, 26th Annual International Cryptology Conference, Santa Barbara, California, USA, August 20–24, 2006, Proceedings, pages 131–141, 2006.
Reg09. Oded Regev. On lattices, learning with errors, random linear codes, and cryptography. J. ACM, 56(6):34:1–34:40, September 2009.
Sch87. Claus-Peter Schnorr. A hierarchy of polynomial time lattice basis reduction algorithms. Theor. Comput. Sci., 53:201–224, 1987.
SE94. Claus-Peter Schnorr and M. Euchner. Lattice basis reduction: Improved practical algorithms and solving subset sum problems. Math. Program., 66:181–199, 1994.
Sha59. Claude E. Shannon. Probability of error for optimal codes in a Gaussian channel. Bell System Technical Journal, 38(3):611–656, 1959.
Sha84. Adi Shamir. A polynomial-time algorithm for breaking the basic Merkle-Hellman cryptosystem. IEEE Trans. Information Theory, 30(5):699–704, 1984.
SVP. SVP Challenges.
Vlă19. Serge Vlăduţ. Lattices with exponentially large kissing numbers. Moscow Journal of Combinatorics and Number Theory, 8(2):163–177, 2019.
Wyn65. Aaron D. Wyner. Capabilities of bounded discrepancy decoding. Bell System Technical Journal, 44(6):1061–1122, 1965.