On the Lattice Distortion Problem
Huck Bennett ∗ Daniel Dadush † Noah Stephens-Davidowitz ∗‡ August 20, 2018
Abstract
We introduce and study the Lattice Distortion Problem (LDP). LDP asks how "similar" two lattices are. I.e., what is the minimal distortion of a linear bijection between the two lattices? LDP generalizes the Lattice Isomorphism Problem (the lattice analogue of Graph Isomorphism), which simply asks whether the minimal distortion is one.

As our first contribution, we show that the distortion between any two lattices is approximated up to a n^{O(log n)} factor by a simple function of their successive minima. Our methods are constructive, allowing us to compute low-distortion mappings that are within a 2^{O(n log log n / log n)} factor of optimal in polynomial time and within a n^{O(log n)} factor of optimal in singly exponential time. Our algorithms rely on a notion of basis reduction introduced by Seysen (Combinatorica 1993), which we show is intimately related to lattice distortion. Lastly, we show that LDP is NP-hard to approximate to within any constant factor (under randomized reductions), by a reduction from the Shortest Vector Problem.

1 Introduction

An n-dimensional lattice L ⊂ R^n is the set of all integer linear combinations of linearly independent vectors B = [b_1, . . . , b_n] with b_i ∈ R^n. We write the lattice generated by basis B as L(B) = { Σ_{i=1}^n a_i b_i : a_i ∈ Z }.

Lattices are very well-studied classical mathematical objects (e.g., [Min10, CS98]), and over the past few decades, computational problems on lattices have found a remarkably large number of applications in computer science. Algorithms for lattice problems have proven to be quite useful, and they have therefore been studied extensively (e.g., [LLL82, Kan87, AKS01, MV13]). And, over the past twenty years, many strong cryptographic primitives have been constructed with their security based on the (worst-case) hardness of various computational lattice problems (e.g., [Ajt96, MR07, GPV08, Gen09, Reg09, BV14]).

In this paper, we address a natural question: how "similar" are two lattices?
I.e., given lattices L_1, L_2, does there exist a linear bijective mapping T : L_1 → L_2 that does not change the distances between points by much? If we insist that T exactly preserves distances, then this is the Lattice Isomorphism Problem (LIP), which was studied in [PS97, SSV09, HR14, LS14]. We extend this to the Lattice Distortion Problem (LDP), which asks how well such a mapping T can approximately preserve distances between points.

Given two lattices L_1, L_2, we define the distortion between them as

D(L_1, L_2) = min { ||T|| ||T^{-1}|| : T(L_1) = L_2 } ,

where ||T|| = sup_{||x|| = 1} ||T x|| is the operator norm. The quantity κ(T) = ||T|| · ||T^{-1}|| is the condition number of T, which measures how much T "distorts distances" (up to a fixed scaling). It is easy to check that D(L_1, L_2) bounds the ratio between most natural geometric parameters of L_1 and L_2 (up to scaling), and hence D(L_1, L_2) is a strong measure of "similarity" between lattices. In particular, D(L_1, L_2) = 1 if and only if L_1, L_2 are isomorphic (i.e., if and only if they are related by a scaled orthogonal transformation).

The Lattice Distortion Problem (LDP) is then defined in the natural way as follows. The input is two n-dimensional lattices L_1, L_2 (each represented by a basis), and the goal is to compute a bijective linear transformation T mapping L_1 to L_2 such that κ(T) = D(L_1, L_2).

∗ Department of Computer Science, Courant Institute of Mathematical Sciences, New York University. Email: [email protected], [email protected].
† Centrum Wiskunde & Informatica, Amsterdam. Email: [email protected]. Supported by the NWO Veni grant 639.071.510.
‡ This material is based upon work partially supported by the National Science Foundation under Grant No. CCF-1320188. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
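The condition number κ(T) is straightforward to evaluate numerically. The following sketch (in Python with numpy; the function names are ours, not from the paper) computes κ for the specific map T = B_2 B_1^{-1} determined by a pair of bases, using the fact that the operator norm is the largest singular value:

```python
import numpy as np

def condition_number(T):
    """kappa(T) = ||T|| * ||T^{-1}||: the ratio of the largest to the
    smallest singular value of T."""
    sv = np.linalg.svd(T, compute_uv=False)  # sorted in descending order
    return sv[0] / sv[-1]

def basis_map_distortion(B1, B2):
    """kappa of the map T = B2 B1^{-1} sending basis B1 to basis B2.
    This only upper-bounds D(L1, L2): the true distortion minimizes
    over all bijective linear maps between the two lattices."""
    return condition_number(B2 @ np.linalg.inv(B1))

# A rotation is a lattice isomorphism, so the map has distortion 1.
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(basis_map_distortion(np.eye(2), R))                    # -> 1.0 (up to rounding)

# Stretching one axis by a factor 3 gives condition number 3.
print(basis_map_distortion(np.eye(2), np.diag([1.0, 3.0])))  # -> 3.0
```

Note that minimizing κ over all basis pairs, rather than evaluating it for one pair, is exactly what makes LDP a nontrivial computational problem.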
In this work, we study the approximate search and decisional versions of this problem, defined in the usual way. We refer to them as γ-LDP and γ-GapLDP respectively, where γ = γ(n) ≥ 1 is the approximation factor.

As our first main contribution, we show that the distortion between any two lattices can be approximated by a natural function of geometric lattice parameters. Indeed, our proof techniques are constructive, leading to our second main contribution: an algorithm that computes low-distortion mappings, with a trade-off between the running time and the approximation factor. Finally, we show hardness of approximating lattice distortion.

To derive useful bounds on the distortion between two lattices, it is intuitively clear that one should study the "different scales over which the two lattices live." A natural notion of this is given by the successive minima, which are defined as follows. The i-th successive minimum, λ_i(L), of L is the minimum radius r > 0 such that L contains i linearly independent vectors of norm at most r. For example, a lattice generated by a basis of orthogonal vectors of lengths 0 < a_1 ≤ · · · ≤ a_n has successive minima λ_i(L) = a_i. Since low-distortion mappings approximately preserve distances, it is intuitively clear that two lattices can only be related by a low-distortion mapping if their successive minima are close to each other (up to a fixed scaling).

Concretely, for two n-dimensional lattices L_1, L_2, we define

M(L_1, L_2) = max_{i ∈ [n]} λ_i(L_2)/λ_i(L_1) ,   (1)

which measures how much we need to scale up L_1 so that its successive minima are at least as large as those of L_2. For any linear map T from L_1 to L_2, it is easy to see that λ_i(L_2) ≤ ||T|| λ_i(L_1). Thus, by definition, M(L_1, L_2) ≤ ||T||. Applying the same reasoning to T^{-1}, we derive the following simple lower bound on distortion.

D(L_1, L_2) ≥ M(L_1, L_2) · M(L_2, L_1) .   (2)

We note that this lower bound is tight when L_1, L_2 are each generated by bases of orthogonal vectors.
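For lattices generated by orthogonal vectors, the successive minima are just the sorted vector lengths, so Eq. (1) and the lower bound (2) can be checked directly. A minimal sketch under that diagonal-lattice assumption (the helper names are ours):

```python
import numpy as np

def successive_minima_diagonal(lengths):
    """For a lattice generated by orthogonal vectors of the given lengths,
    the successive minima are those lengths in non-decreasing order."""
    return np.sort(np.abs(np.asarray(lengths, dtype=float)))

def M(minima1, minima2):
    """M(L1, L2) = max_i lambda_i(L2) / lambda_i(L1), as in Eq. (1)."""
    return float(np.max(minima2 / minima1))

lam1 = successive_minima_diagonal([1.0, 4.0])  # L1 = Z x 4Z
lam2 = successive_minima_diagonal([2.0, 2.0])  # L2 = 2Z x 2Z
lower_bound = M(lam1, lam2) * M(lam2, lam1)    # Eq. (2): D(L1, L2) >= 4
print(lower_bound)  # -> 4.0

# For these diagonal lattices the bound is attained by T = diag(2, 1/2),
# which maps the basis diag(1, 4) of L1 to the basis diag(2, 2) of L2.
T = np.diag([2.0, 0.5])
sv = np.linalg.svd(T, compute_uv=False)
print(sv[0] / sv[-1])  # kappa(T) -> 4.0
```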
But, it is a priori unclear if any comparable upper bound should hold for general lattices, since the successive minima are a very "coarse" characterization of the geometry of the lattice. Nevertheless, we show a corresponding upper bound. Theorem 1.1.
Let L_1, L_2 be n-dimensional lattices. Then,

M(L_1, L_2) · M(L_2, L_1) ≤ D(L_1, L_2) ≤ n^{O(log n)} · M(L_1, L_2) · M(L_2, L_1) .

In particular, Theorem 1.1, together with standard transference theorems (e.g., [Ban93]), implies that n^{O(log n)}-GapLDP is in NP ∩ coNP. While the factor on the right-hand side of the theorem might be far from optimal, we show in Section 5.1 that it cannot be improved below Ω(√n). Intuitively, this is because there exist lattices that are much more dense than Z^n over large scales but still have λ_i(L) = Θ(1) for all i. I.e., there exist very dense lattice sphere packings (see, e.g., [Sie45]).

To prove the above theorem, we make use of the intuition that a low-distortion mapping T from L_1 to L_2 should map a "short" basis B_1 of L_1 to a "short" basis B_2 of L_2. (Note that the condition T B_1 = B_2 completely determines T = B_2 B_1^{-1}.) The difficulty here is that standard notions of "short" fail for the purpose of capturing low-distortion mappings. In particular, in Section 5.2, we show that Hermite-Korkine-Zolotarev (HKZ) reduced bases, one of the strongest notions of "shortest possible" lattice bases, do not suffice by themselves for building low-distortion mappings. (See Section 2.6 for the definition of HKZ-reduced bases.) In particular, we give a simple example of a lattice L where an HKZ-reduced basis of L misses the optimal distortion D(Z^n, L) by an exponential factor.

Fortunately, we show that a suitable notion of shortness does exist for building low-distortion mappings by making a novel connection between low-distortion mappings and a notion of basis reduction introduced by Seysen [Sey93]. In particular, for a basis B = [b_1, . . . , b_n] and dual basis B* = B^{-T} = [b*_1, . . . , b*_n], Seysen's condition number is defined as

S(B) = max_{i ∈ [n]} ||b_i|| ||b*_i|| .

Note that we always have ⟨b_i, b*_i⟩ = 1, so this parameter measures how tight the Cauchy-Schwarz inequality is over all primal-dual basis-vector pairs.
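As a quick numerical sanity check of this definition (a sketch of our own; the helper names are not from the paper): the dual basis is B* = B^{-T}, the identity ⟨b_i, b*_i⟩ = 1 always holds, and S(B) = 1 exactly when every primal-dual pair makes Cauchy-Schwarz tight, as for an orthonormal basis:

```python
import numpy as np

def dual_basis(B):
    """Columns of B^{-T} are the dual basis vectors: <b_i, b*_j> = 1 if i = j, else 0."""
    return np.linalg.inv(B).T

def seysen_condition_number(B):
    """S(B) = max_i ||b_i|| * ||b*_i||, with the basis vectors as columns of B."""
    Bd = dual_basis(B)
    return max(np.linalg.norm(B[:, i]) * np.linalg.norm(Bd[:, i])
               for i in range(B.shape[1]))

B = np.array([[1.0, 5.0],
              [0.0, 1.0]])  # a skewed basis of Z^2
Bd = dual_basis(B)

print(np.diag(B.T @ Bd))                   # -> [1. 1.]: <b_i, b*_i> = 1 for each i
print(seysen_condition_number(np.eye(2)))  # -> 1.0: Cauchy-Schwarz is tight
print(seysen_condition_number(B))          # -> sqrt(26) ~ 5.10: skew makes it loose
```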
We extend this notion and define S(L) as the minimum of S(B) over all bases B of L. Using this notion, we give an effective version of Theorem 1.1 as follows. Theorem 1.2.
Let L_1, L_2 be n-dimensional lattices. Let B_1, B_2 ∈ R^{n×n} be bases of L_1, L_2 whose columns are sorted in non-decreasing order of length. Then, we have that

M(L_1, L_2) M(L_2, L_1) ≤ κ(B_2 B_1^{-1}) ≤ n² S(B_1)² S(B_2)² · M(L_1, L_2) M(L_2, L_1) .

In particular, we have that

M(L_1, L_2) M(L_2, L_1) ≤ D(L_1, L_2) ≤ n² S(L_1)² S(L_2)² · M(L_1, L_2) M(L_2, L_1) .

From here, the bound in Theorem 1.1 follows directly from the following (surprising) theorem of Seysen.
Theorem 1.3 (Seysen [Sey93]). For any lattice L ⊂ R^n, S(L) ≤ n^{O(log n)}.

Together, Theorems 1.2 and 1.3 reduce approximating D(L_1, L_2) to within a factor of n^{O(log n)} to computing M(L_1, L_2) and M(L_2, L_1). But, Seysen's proof of the above theorem is actually constructive! In particular, he shows how to efficiently convert any suitably reduced lattice basis into a basis with a low Seysen condition number. (See Section 2.6.2 for details.) Using this methodology, combined with standard basis reduction techniques, we derive the following time-approximation trade-off for γ-LDP.

Theorem 1.4 (Algorithm for LDP). For any log n ≤ k ≤ n, there is an algorithm solving k^{O(n/k + log n)}-LDP in time 2^{O(k)}.

In other words, using the bounds in Theorem 1.1 together with known algorithms, we are able to approximate the distortion between two lattices. But, with a bit more work, we are able to solve search
LDP by explicitly computing a low-distortion mapping between the input lattices. We also prove the following lower bound for LDP.
Theorem 1.5 (Hardness of LDP). γ-GapLDP is NP-hard under randomized polynomial-time reductions for any constant γ ≥ 1.

In particular, we show a reduction from approximating the (decisional) Shortest Vector Problem (GapSVP) over lattices to γ-GapLDP, where the approximation factor that we obtain for GapSVP is O(γ). Since hardness of GapSVP is quite well-studied [Ajt98, Mic01, Kho05, HR12], we are immediately able to import many hardness results to GapLDP. (See Corollary 4.7 and Theorem 4.8 for the precise statements.)

Related work

The main related work of which we are aware is that of Haviv and Regev [HR14] on the Lattice Isomorphism Problem (LIP). In their paper, they give an n^{O(n)}-time algorithm for solving LIP exactly, which proceeds by cleverly identifying a small candidate set of bases of L_1 and L_2 that must be mapped to each other by any isomorphism. One might expect that such an approach should also work for the purpose of solving LDP either exactly or for approximation factors below n^{O(log n)}. However, the crucial assumption in LIP, that vectors in one lattice must be mapped to vectors of the same length in the other, completely breaks down in the current context. We thus do not know how to extend their techniques to LDP.

Much more generally, we note that LIP is closely related to the Graph Isomorphism Problem (GI). For example, both problems are in SZK but not known to be in P (although recent work on algorithms for GI has been quite exciting [Bab16]!), and GI reduces to LIP [SSV09]. Therefore, LDP is qualitatively similar to the Approximate Graph Isomorphism Problem, which was studied by Arora, Frieze, and Kaplan [AFK02], who showed an upper bound, and Arvind, Köbler, Kuhnert, and Vasudev [AKKV12], who proved both upper and lower bounds. In particular, [AKKV12] showed that various versions of this problem are NP-hard to approximate to within a constant factor. Qualitatively, these hardness results are similar to our Theorem 1.5.
Conclusions and open questions

In conclusion, we introduce the Lattice Distortion Problem and show a connection between LDP and the notion of Seysen-reduced bases. We use this connection to derive time-approximation trade-offs for LDP. We also prove approximation hardness for GapLDP, showing a qualitative difference with LIP (which is unlikely to be NP-hard under reasonable complexity-theoretic assumptions).

One major open question is what the correct bound in Theorem 1.3 is. In particular, there are no known families of lattices for which the Seysen condition number is provably superpolynomial, and hence it is possible that S(L) = poly(n) for any n-dimensional lattice L. A better bound would immediately improve our Theorem 1.2 and give a better approximation factor for GapLDP.

We also note that all of our algorithms solve LDP only for arguably very large approximation factors n^{Ω(log n)}. We currently do not even know whether there exists a fixed-dimension polynomial-time algorithm for γ-LDP for any γ = n^{o(log n)}. The main problem here is that we do not have any good characterization of nearly optimal distortion mappings between lattices.

Organization
In Section 2, we present necessary background material. In Section 3, we give our approximations for lattice distortion, proving Theorems 1.2 and 1.4. In Section 4, we give the hardness for lattice distortion, proving Theorem 1.5. In Section 5, we give some illustrative example instances of lattice distortion.
Acknowledgements
We thank Oded Regev for pointing us to Seysen's paper and for many helpful conversations. The concise proof of Lemma ?? using the Transference Theorem is due to Michael Walter. We thank Paul Kirchner for telling us about Proposition 2.13, and for identifying a minor bug in an earlier version of this paper.

2 Preliminaries

For x ∈ R^n, we write ||x|| for the Euclidean norm of x. We omit any mention of the bit length in the running time of our algorithms. In particular, all of our algorithms take as input vectors in Q^n and run in time f(n) · poly(m) for some f, where m is the maximal bit length of an input vector. We therefore suppress the factor of poly(m).

The i-th successive minimum of a lattice L is defined as

λ_i(L) = inf { r > 0 : dim(span(rB_2^n ∩ L)) ≥ i } ,

where B_2^n is the closed Euclidean unit ball. That is, the first successive minimum is the length of the shortest non-zero lattice vector, the second successive minimum is the length of the shortest lattice vector which is linearly independent of a vector achieving the first, and so on. When L is clear from context, we simply write λ_i.

The dual lattice of L is defined as L* = { x ∈ R^n : ⟨x, y⟩ ∈ Z for all y ∈ L }. If L = L(B) then L* = L(B*), where B* = B^{-T} is the inverse transpose of B. We call B* = [b*_1, . . . , b*_n] the dual basis of B, and write λ*_i = λ_i(L*). We will repeatedly use Banaszczyk's Transference Theorem, which relates the successive minima of a lattice to those of its dual.

Theorem 2.1 (Banaszczyk's Transference Theorem [Ban93]). For every rank-n lattice L and every i ∈ [n],

1 ≤ λ_i(L) λ_{n−i+1}(L*) ≤ n .

Given a lattice L, we define the determinant of L as det(L) := |det(B)|, where B is a basis with L(B) = L. Since two bases B, B′ of L differ by a unimodular transformation, we have that |det(B)| = |det(B′)|, so det(L) is well-defined.

We sometimes work with lattices that do not have full rank—i.e., lattices generated by d linearly independent vectors L = L(b_1, . . . , b_d) with d < n. In this case, we simply identify span(b_1, . . . , b_d) with R^d and consider the lattice to be embedded in this space.

We next characterize linear mappings between lattices in terms of bases.
Lemma 2.2.
Let L_1, L_2 be full-rank lattices. Then a mapping T : L_1 → L_2 is bijective and linear if and only if T = BA^{-1} for some bases A, B of L_1, L_2 respectively. In particular, for any basis A of L_1, T(A) is a basis of L_2.

Proof. We first show that such a mapping is a bijection from L_1 to L_2. Let T = BA^{-1}, where A = [a_1, . . . , a_n] and B = [b_1, . . . , b_n] are bases of L_1, L_2 respectively. Because T has full rank, it is injective as a mapping from R^n to R^n, and it is therefore injective as a mapping from L_1 to L_2. We have that for every w ∈ L_2, w = Σ_{i=1}^n c_i b_i with c_i ∈ Z. Let v = Σ_{i=1}^n c_i a_i ∈ L_1. Then, T(v) = T(Σ_{i=1}^n c_i a_i) = Σ_{i=1}^n c_i b_i = w. Therefore, T is a bijection from L_1 to L_2.

We next show that any linear map T with T(L_1) = L_2 must have this form. Let A = [a_1, . . . , a_n] be a basis of L_1, and let B = T(A). We claim that B = [b_1, . . . , b_n] is a basis of L_2. Let w ∈ L_2. Because T is a bijection between L_1 and L_2, there exists v ∈ L_1 such that T v = w. Using the definition of a basis and the linearity of T,

w = T v = T( Σ_{i=1}^n c_i a_i ) = Σ_{i=1}^n c_i b_i ,

for some c_1, . . . , c_n ∈ Z. Because w was picked arbitrarily, it follows that B is a basis of L_2.

Computing bases with small S(B)

Seysen shows how to take any basis with relatively low multiplicative drop in its Gram-Schmidt vectors and convert it into a basis with relatively low S(B) = max_i ||b_i|| ||b*_i|| [Sey93]. By combining this with Gama and Nguyen's slide reduction technique [GN08], we obtain the following result. Theorem 2.3.
For every log n ≤ k ≤ n, there exists an algorithm that takes a lattice L as input and computes a basis B of L with S(B) ≤ k^{O(n/k + log k)} in time 2^{O(k)}.

In particular, applying Seysen's procedure to slide-reduced bases suffices. We include a proof of Theorem 2.3 and a high-level description of Seysen's procedure in Section 2.6.
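What basis reduction buys here can be illustrated in two dimensions: right-multiplying a basis by a unimodular matrix U (integer entries, determinant ±1) leaves the generated lattice unchanged but can drastically shrink S(B). The following toy sketch is our own illustration; the actual algorithm behind Theorem 2.3 is, of course, far more involved:

```python
import numpy as np

def S(B):
    """Seysen condition number S(B) = max_i ||b_i|| * ||b*_i||, where B* = B^{-T}."""
    Bd = np.linalg.inv(B).T
    return max(np.linalg.norm(B[:, i]) * np.linalg.norm(Bd[:, i])
               for i in range(B.shape[1]))

# A "bad" basis of Z^2, and a unimodular U (det = 1) that undoes the skew.
B = np.array([[1.0, 7.0],
              [0.0, 1.0]])
U = np.array([[1.0, -7.0],
              [0.0,  1.0]])

print(round(np.linalg.det(U)))  # -> 1, so L(B) = L(B @ U)
print(S(B))                     # -> sqrt(50) ~ 7.07: far from Seysen-reduced
print(S(B @ U))                 # -> 1.0: B @ U is the identity basis
```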
Definition 2.4.
For any γ = γ(n) ≥ 1, the γ-Lattice Distortion Problem (γ-LDP) is the search problem defined as follows. The input consists of two lattices L_1, L_2 (represented by bases B_1, B_2 ∈ Q^{n×n}). The goal is to output a matrix T ∈ R^{n×n} such that T(L_1) = L_2 and κ(T) ≤ γ · D(L_1, L_2). Definition 2.5.
For any γ = γ(n) ≥ 1, γ-GapLDP is the promise problem defined as follows. The input consists of two lattices L_1, L_2 (represented by bases B_1, B_2 ∈ Q^{n×n}) and a number c ≥ 1. The goal is to distinguish between a 'YES' instance where D(L_1, L_2) ≤ c and a 'NO' instance where D(L_1, L_2) > γ · c.

2.5 Complexity of LDP

We show some basic facts about the complexity of GapLDP. First, we show that the Lattice Isomorphism Problem (LIP) corresponds to the special case of GapLDP where c = 1. LIP takes bases of L_1, L_2 as input and asks if there exists an orthogonal linear transformation O such that O(L_1) = L_2. Haviv and Regev [HR14] show that there exists an n^{O(n)}-time algorithm for LIP, and that LIP is in the complexity class SZK. Lemma 2.6.
There is a polynomial-time reduction from LIP to 1-GapLDP.

Proof. Let L_1, L_2 be an LIP instance. First check that det(L_1) = det(L_2). If not, then output a trivial 'NO' instance of 1-GapLDP. Otherwise, map the LIP instance to the 1-GapLDP instance with the same input bases and c = 1. For any linear bijection T : L_1 → L_2, we must have |det(T)| = 1, and therefore κ(T) = 1 if and only if ||T|| = ||T^{-1}|| = 1. So, this is a 'YES' instance of GapLDP if and only if L_1, L_2 are isomorphic.

Lemma 2.7. γ-GapLDP is in NP.

Proof. Let I = (L_1, L_2, c) be an instance of GapLDP, and let s be the length of I. We will show that for a 'YES' instance, there are bases A, B of L_1, L_2 respectively such that T = BA^{-1} requires at most poly(s) bits to specify and κ(T) ≤ c. Assume without loss of generality that L_1, L_2 ⊆ Z^n. Otherwise, scale the input lattices to achieve this at the expense of a factor s blow-up in input size.

To satisfy ||T|| ||T^{-1}|| ≤ c, we must have that |t_ij| ≤ ||T|| ≤ c · (det(L_2)/det(L_1))^{1/n} ≤ c · det(L_2) for each entry t_ij of T, where the last inequality uses that det(L_1), det(L_2) ≥ 1 for full-rank integer lattices. By Cramer's rule, each entry of A^{-1}, and hence of T, will be an integer multiple of 1/det(L_1), so we can assume without loss of generality that the denominator of each entry of T is det(L_1).

Combining these bounds and applying Hadamard's inequality, we get that each |t_ij| takes at most

log( c · det(L_1) det(L_2) ) ≤ log( c · Π_{i=1}^n ||a_i|| Π_{i=1}^n ||b_i|| )

bits to specify. Accounting for the sign of each t_ij, it follows that T takes at most n² · log(2c · Π_{i=1}^n ||a_i|| ||b_i||) ≤ n² · (s + 1) bits to specify.

We remark that we can replace c with the quantity n^{O(log n)} M(L_1, L_2) M(L_2, L_1) (as given by the upper bound in Theorem 1.1) in the preceding argument to obtain an upper bound on the distortion of an optimal mapping T that does not depend on c.

2.6 Basis reduction

In this section, we define various notions of basis reduction and show how to use them to prove Theorem 2.3.

For a basis B = [b_1, . . . , b_n], we write π_i^{(B)} := π_{span(b_1, . . . , b_{i−1})^⊥} to represent projection onto the subspace span(b_1, . . . , b_{i−1})^⊥. We then define the Gram-Schmidt orthogonalization (b̃_1, . . . , b̃_n) of B as b̃_i = π_i^{(B)}(b_i). By construction, the vectors b̃_1, . . . , b̃_n are orthogonal, and each b_i is a linear combination of b̃_1, . . . , b̃_i. We define

μ_ij = ⟨b_i, b̃_j⟩ / ⟨b̃_j, b̃_j⟩ .

We define the QR-decomposition of a full-rank matrix B as B = QR, where Q has orthonormal columns and R is upper triangular. The QR-decomposition of a matrix is unique, and can be computed efficiently by applying Gram-Schmidt orthogonalization to the columns of B.

Unimodular matrices, denoted GL(n, Z), form the multiplicative group of n × n matrices with integer entries and determinant ±1.

Fact 2.8. L(B) = L(B′) if and only if there exists U ∈ GL(n, Z) such that B′ = B · U.

Based on this, a useful way to view basis reduction is as right-multiplication by unimodular matrices.
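The Gram-Schmidt orthogonalization and the coefficients μ_ij can be computed directly from the definitions above, and cross-checked against the QR-decomposition via |r_ii| = ||b̃_i||. A minimal sketch (the function name is ours):

```python
import numpy as np

def gram_schmidt(B):
    """Return (Btilde, mu): the Gram-Schmidt vectors as columns of Btilde,
    and the coefficients mu[i, j] = <b_i, btilde_j> / <btilde_j, btilde_j>
    for j < i (with mu[i, i] = 1)."""
    n = B.shape[1]
    Btilde = np.zeros_like(B, dtype=float)
    mu = np.eye(n)
    for i in range(n):
        Btilde[:, i] = B[:, i]
        for j in range(i):
            mu[i, j] = B[:, i] @ Btilde[:, j] / (Btilde[:, j] @ Btilde[:, j])
            Btilde[:, i] -= mu[i, j] * Btilde[:, j]
    return Btilde, mu

B = np.array([[2.0, 1.0],
              [0.0, 2.0]])
Bt, mu = gram_schmidt(B)
print(Bt[:, 0] @ Bt[:, 1])  # -> 0.0: Gram-Schmidt vectors are orthogonal

# Consistency with the QR-decomposition B = QR: |r_ii| = ||btilde_i||.
R = np.linalg.qr(B, mode='r')
print(np.allclose(np.abs(np.diag(R)),
                  np.linalg.norm(Bt, axis=0)))  # -> True
```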
2.6.1 HKZ-reduced and slide-reduced bases

A very strong notion of basis reduction introduced by Korkine and Zolotareff [KZ73] gives one way of formalizing what it means to be a "shortest-possible" lattice basis.
Definition 2.9 ([KZ73], Definition 1 in [Sey93]). Let B be a basis of L. B = [b_1, . . . , b_n] is HKZ (Hermite-Korkine-Zolotareff) reduced if

1. ∀ j < i, |μ_ij| ≤ 1/2;
2. ||b_1|| = λ_1(L(B)); and
3. if n > 1, then [π_2^{(B)}(b_2), . . . , π_2^{(B)}(b_n)] is an HKZ basis of π_2^{(B)}(L).

By definition, the first vector b_1 in an HKZ basis is a shortest non-zero vector in the lattice. Furthermore, computing an HKZ basis can be achieved by making n calls to an SVP oracle. So, the two problems have the same time complexity up to a factor of n. In particular, computing HKZ bases is NP-hard.

Gama and Nguyen (building on the work of Schnorr [Sch87]) introduced the notion of slide-reduced bases [GN08], which can be thought of as a relaxed notion of HKZ bases that can be computed more efficiently.

Definition 2.10 ([GN08, Definition 1]). Let B be a basis of L ⊂ Q^n and ε >
0. We say that B is ε-DSVP (dual SVP) reduced if its corresponding dual basis [b*_1, . . . , b*_n] satisfies ||b*_n|| ≤ (1 + ε) · λ_1(L*). Then, for an integer k dividing n, we say that B = [b_1, . . . , b_n] is (ε, k)-slide reduced if

1. ∀ j < i, |μ_ij| ≤ 1/2;
2. ∀ 0 ≤ i ≤ n/k − 1, the "projected truncated basis" [π_{ik+1}^{(B)}(b_{ik+1}), . . . , π_{ik+1}^{(B)}(b_{ik+k})] is HKZ reduced; and
3. ∀ 0 ≤ i ≤ n/k − 2, the "shifted projected truncated basis" [π_{ik+2}^{(B)}(b_{ik+2}), . . . , π_{ik+2}^{(B)}(b_{ik+k+1})] is ε-DSVP reduced.

Theorem 2.11 ([GN08]). There is an algorithm that takes as input a lattice L ⊂ Q^n, ε > 0, and an integer k ≥ log n dividing n, and outputs a (k, ε)-slide-reduced basis of L in time poly(1/ε) · 2^{O(k)}.
We will be particularly concerned with the ratios between the lengths of the Gram-Schmidt vectors of a given basis. We prefer bases whose Gram-Schmidt vectors do not "decay too quickly," and we measure this decay by

η(B) = max_{i ≤ j} ||b̃_i|| / ||b̃_j|| .

Previous work bounded η(B) for HKZ-reduced bases as follows.

Theorem 2.12 ([LLS90, Proposition 4.2]). For any HKZ-reduced basis B over Q^n, η(B) ≤ n^{O(log n)}.

We now use Theorem 2.12 and some of the results in [GN08] to bound η(B) for slide-reduced bases. Proposition 2.13.
For any integer k ≥ 2 dividing n, if B is a (1/n, k)-slide-reduced basis for a lattice L ⊂ Q^n, then η(B) ≤ k^{O(n/k + log k)}.

Proof. We collect three simple inequalities that will together imply the result. First, from [GN08, Eq. (16)], we have ||b̃_1|| ≤ k^{O(n/k)} · ||b̃_{jk+1}|| for all 0 ≤ j ≤ n/k − 1. Noting that the projection [π_{ik+1}(b_{ik+1}), . . . , π_{ik+1}(b_{ik+k})] of a slide-reduced basis is also slide reduced, we see that

||b̃_{ik+1}|| ≤ k^{O(n/k)} · ||b̃_{jk+1}|| ,   (3)

for all 0 ≤ i ≤ j ≤ n/k − 1. Next, from Theorem 2.12 and the fact that the "projected truncated bases" are HKZ reduced, we have that

||b̃_{ik+ℓ}|| ≤ k^{O(log k)} · ||b̃_{ik+ℓ′}|| ,   (4)

for all 1 ≤ ℓ ≤ ℓ′ ≤ k. Finally, [GN08] observe that

||b̃_{ik+k}|| ≤ C · ||b̃_{ik+k+1}|| ,   (5)

for all 0 ≤ i ≤ n/k − 2, where C > 0 is a universal constant.¹ Now, fix 0 ≤ i ≤ i′ ≤ n/k − 1 and 1 ≤ ℓ, ℓ′ ≤ k such that ik + ℓ < i′k + ℓ′. If i = i′, then clearly ||b̃_{ik+ℓ}|| / ||b̃_{i′k+ℓ′}|| ≤ k^{O(log k)} by Eq. (4). Otherwise, i < i′ and

||b̃_{ik+ℓ}|| / ||b̃_{i′k+ℓ′}|| ≤ k^{O(log k)} · ||b̃_{ik+ℓ}|| / ||b̃_{i′k+1}||   (Eq. (4))
≤ k^{O(n/k + log k)} · ||b̃_{ik+ℓ}|| / ||b̃_{ik+k+1}||   (Eq. (3))
≤ k^{O(n/k + log k)} · ||b̃_{ik+ℓ}|| / ||b̃_{ik+k}||   (Eq. (5))
≤ k^{O(n/k + log k)}   (Eq. (4)) ,

as needed.

Finally, we show how to get rid of the requirement that k divides n.

¹They actually observe that a slide-reduced basis is LLL reduced, which immediately implies Eq. (5).

Proposition 2.14. For any log n ≤ k ≤ n, there is an algorithm that takes as input a lattice L ⊂ Q^n and outputs a basis B of L such that η(B) ≤ k^{O(n/k + log k)}. Furthermore, the algorithm runs in time 2^{O(k)}.

Proof. We assume without loss of generality that k is an integer. Let n′ = ⌈n/k⌉ · k be the smallest integer greater than or equal to n that is divisible by k. On input a basis B̂ = [b̂_1, . . . , b̂_n] for the lattice L ⊂ Q^n, the algorithm behaves as follows. Let r = 2^{Ω(n)} · max_i ||b̂_i||. Let L′ := L(b̂_1, . . . , b̂_n, r · e_{n+1}, . . . , r · e_{n′}) ⊂ Q^{n′} be "the lattice obtained by appending n′ − n orthogonal vectors of length r to L." The algorithm then computes a basis B′ = [b_1, . . . , b_{n′}] of L′ as in Theorem 2.11 with ε = 1/n and returns the basis consisting of the first n entries of B′, B = [b_1, . . . , b_n].

It follows immediately from Theorem 2.11 that the running time is as claimed, and from Proposition 2.13 we have that η(B) ≤ η(B′) ≤ k^{O(n′/k + log k)} ≤ k^{O(n/k + log k)}. So, we only need to prove that B is in fact a basis for L (as opposed to some other sublattice of L′).

Consider the first i such that ||b̃_i|| ≥ r. We claim that π_i^{(B′)}(B̂) = 0. If not, then choose j such that π_i^{(B′)}(b̂_j) ≠ 0.
There must be some ak + ℓ ≥ i with 1 ≤ ℓ ≤ k such that π_{ak+ℓ}^{(B′)}(b̂_j) ≠ 0, but π_{ak+ℓ}^{(B′)}(b̂_j) ∈ span(π_{ak+ℓ}(b_{ak+ℓ}), . . . , π_{ak+ℓ}(b_{ak+k})). It follows from the fact that B′ is a basis of L′ that π_{ak+ℓ}^{(B′)}(b̂_j) ∈ L(π_{ak+ℓ}(b_{ak+ℓ}), . . . , π_{ak+ℓ}(b_{ak+k})). But, since [π_{ak+ℓ}(b_{ak+ℓ}), . . . , π_{ak+ℓ}(b_{ak+k})] is an HKZ basis, it must be the case that

||b̃_{ak+ℓ}|| = ||π_{ak+ℓ}(b_{ak+ℓ})|| ≤ ||π_{ak+ℓ}^{(B′)}(b̂_j)|| ≤ ||b̂_j|| ≤ r / 2^{Ω(n)} < ||b̃_i|| / η(B′) ,

which contradicts the definition of η(B′).

So, π_i^{(B′)}(B̂) = 0. It follows that i = n + 1 and L ⊆ span(b_1, . . . , b_{i−1}) = span(B). And, since B′ is a basis of L′, it follows that L = L(B), as needed.

2.6.2 Seysen's algorithm

Although slide-reduced bases B consist of short vectors and have bounded η(B), they make only weak guarantees about the length of the vectors in the dual basis B*. Of course, one way to compute a basis whose dual basis is short is to simply compute B such that B* is a suitably reduced basis of L*. Such a basis B is called a dual-reduced basis, and sees use in applications such as [HR14].

However, we would like to compute a basis such that the vectors in B and B* are both short, which Seysen addressed in his work [Sey93]. Seysen's main result finds a basis B such that both B and B* are short by dividing this problem into two subproblems. The first involves finding a basis with small η(B), as in Section 2.6.1. The second subproblem, discussed in [Sey93, Section 3], involves conditioning unipotent matrices. Let N(n, R) be the multiplicative group of unipotent n × n matrices. That is, a matrix A ∈ N(n, R) if a_ii = 1 and a_ij = 0 for i > j (i.e., A is upper triangular and has ones on the main diagonal). Let N(n, Z) be the subgroup of N(n, R) with integer entries.
Because N(n, Z) is a subgroup of GL(n, Z), we trivially have that L(B) = L(B · U) for every U ∈ N(n, Z).

Let ||B||_∞ := max_{i,j ∈ [n]} |b_ij| denote the largest magnitude of an entry in B. We follow Seysen [Sey93] and define S′(B) = max{ ||B||_∞, ||B^{-1}||_∞ }. We also let

ζ(n) = sup_{A ∈ N(n,R)} { inf_{U ∈ N(n,Z)} S′(A · U) } .

Theorem 2.15 ([Sey93, Prop. 5 and Thm. 6]). There exists an algorithm Seysen that takes as input A ∈ N(n, R) and outputs A · U, where U ∈ N(n, Z) and S′(A · U) ≤ n^{O(log n)}, in time poly(n). In particular, ζ(n) ≤ n^{O(log n)}.

Let B = QR be a QR-decomposition of B. We may further decompose R as R = DR′, where d_ii = ||b̃_i|| and

r′_ij = 0 if j < i;  r′_ij = 1 if j = i;  r′_ij = μ_ji if j > i.

In particular, note that R′ ∈ N(n, R). It is easy to see that η(B) controls ||D|| ||D^{-1}||. On the other hand, using the bound on ζ(n), we can always multiply B on the right by U ∈ N(n, Z) to control the size of ||R′|| ||R′^{-1}||. Roughly speaking, these two facts imply Theorem 2.16.

Theorem 2.16 ([Sey93, Theorem 7]). Let B = Seysen(B′), where B′ is a basis matrix. Then S(B) ≤ n · η(B′) · ζ(n)².

Proof of Theorem 2.3. Let B = Seysen(B′), where B′ is a basis as computed in Proposition 2.14. We then have that

S(B) ≤ n · η(B′) · ζ(n)²   (by Theorem 2.16)
≤ n · k^{O(n/k + log k)} · ζ(n)²   (by Proposition 2.14)
≤ n · k^{O(n/k + log k)} · (n^{O(log n)})²   (by Theorem 2.15)
≤ k^{O(n/k + log k)} .

We can compute B′ in 2^{O(k)} time using Proposition 2.14. Moreover, by Theorem 2.15, Seysen runs in poly(n) time. Therefore the algorithm runs in 2^{O(k)} time.

3 Computing low-distortion mappings

In this section, we show how to compute low-distortion mappings between lattices by using bases with low S(B).

3.1 Sorted bases and S(B)

Call a basis B = [b_1, . . . , b_n] sorted if ||b_1|| ≤ · · · ≤ ||b_n||. Clearly, ||b_k||/λ_k ≥ 1 for every k when B is sorted. Note that sorting B does not change S(B), since S(·) is invariant under permutations of the basis vectors.

A natural way to quantify the "shortness" of a lattice basis is to upper bound ||b_k||/λ_k for all k ∈ [n]. For example, [LLS90] shows that ||b_k||/λ_k ≤ √n when B is an HKZ basis. We give a characterization of Seysen bases showing that in fact both the primal basis vectors and the dual basis vectors are not much longer than the successive minima.
Namely, S(B) is an upper bound on both ||b_k||/λ_k and ||b*_k||/λ*_{n−k+1} for sorted bases B. Although we only use the fact that S(B) ≥ ||b_k||/λ_k, we show both bounds. Seysen [Sey93] gave essentially the same characterization, but we state and prove it here in a slightly different form.

Lemma 3.1 (Theorem 8 in [Sey93]). Let B be a sorted basis of L. Then for all k ∈ [n],

1. ||b_k||/λ_k(L) ≤ S(B).
2. ||b*_k||/λ*_{n−k+1}(L) ≤ S(B).

Proof. For every k ∈ [n] we have

||b_k||/λ_k ≤ ||b_k|| λ*_{n−k+1}   (by the lower bound in Theorem 2.1)
≤ ||b_k|| max_{i ∈ {k,...,n}} ||b*_i||   (the b*_i are linearly independent)
≤ max_{i ∈ {k,...,n}} ||b_i|| ||b*_i||   (B is sorted)
≤ S(B) .

This proves Item 1. Furthermore, for every k ∈ [n] we have

||b*_k||/λ*_{n−k+1} ≤ ||b_k|| ||b*_k|| / (λ_k λ*_{n−k+1}) ≤ max_{i ∈ [n]} ||b_i|| ||b*_i|| = S(B) .

The first inequality follows from the assumption that B is sorted, and the second follows from the lower bound in Theorem 2.1. This proves Item 2.

3.2 Bounding the distortion

In this section, we bound the distortion D(L_1, L_2) between lattices L_1, L_2. The upper bound is constructive and depends on S(B_1), S(B_2), which naturally leads to Theorem 1.4. Lemma 3.2.
Let A = [a_1, . . . , a_n] and B = [b_1, . . . , b_n] be sorted bases of L_1, L_2 respectively. Then,

||BA^{-1}|| ≤ n S(A) S(B) M(L_1, L_2) .

Proof.

||BA^{-1}|| = || Σ_{i=1}^n b_i (a*_i)^T ||
≤ Σ_{i=1}^n ||b_i (a*_i)^T||   (by the triangle inequality)
= Σ_{i=1}^n ||b_i|| ||a*_i||
≤ n max_{i ∈ [n]} ||b_i|| ||a*_i||
≤ n S(B) max_{i ∈ [n]} λ_i(L_2) ||a*_i||   (by Item 1 in Lemma 3.1)
≤ n S(A) S(B) max_{i ∈ [n]} λ_i(L_2) / ||a_i||   (by definition of S(A))
≤ n S(A) S(B) max_{i ∈ [n]} λ_i(L_2) / λ_i(L_1)   (A is sorted)
= n S(A) S(B) M(L_1, L_2) .

Proof of Theorem 1.2.
Note that by definition there always exist bases B_1, B_2 of L_1, L_2 respectively achieving S(B_i) = S(L_i). Therefore, applying Lemma 3.2 twice to bound both ‖B_2B_1^{−1}‖ and ‖B_1B_2^{−1}‖, we get the upper bound.

For the lower bound, let v_1, …, v_n ∈ L_1 be linearly independent vectors such that ‖v_i‖ = λ_i(L_1) for every i. Then, for every i,

λ_i(L_2) ≤ max_{j ∈ [i]} ‖Tv_j‖ ≤ ‖T‖·max_{j ∈ [i]} ‖v_j‖ = ‖T‖·λ_i(L_1).

Rearranging, we get that λ_i(L_2)/λ_i(L_1) ≤ ‖T‖. This holds for arbitrary i, so in particular max_{i ∈ [n]} λ_i(L_2)/λ_i(L_1) = M(L_1, L_2) ≤ ‖T‖. The same computation with L_1, L_2 reversed shows that M(L_2, L_1) ≤ ‖T^{−1}‖. Multiplying these bounds together implies the lower bound in the theorem statement.

We can now prove Theorem 1.4.

Proof of Theorem 1.4.
Let (L_1, L_2) be an instance of LDP. For i = 1, 2, compute a basis B_i of L_i using the algorithm described in Theorem 2.3 with parameter k. We have that S(B_i) ≤ k^{O(n/k + log k)}. This computation takes 2^{O(k)} time. The algorithm then simply outputs T = B_2B_1^{−1}.

By Lemma 3.2 and the upper bounds on S(B_i), we get that κ(T) ≤ k^{O(n/k + log k)}·M(L_1, L_2)·M(L_2, L_1). This is within a factor of k^{O(n/k + log k)}·n^{O(log n)} = k^{O(n/k + log k)} of D(L_1, L_2) by Theorem 1.1. So, the algorithm is correct.

In this section, we prove the hardness of γ-GapLDP. (See Theorem 4.8.) Our reduction works in two steps. First, we show how to use an oracle for GapLDP to solve a variant of GapCVP that we call γ-GapCVP^α. (See Definition 4.1 and Theorem 4.3.) Given a CVP instance consisting of a lattice L and a target vector t, our idea is to compare "L with t appended to it" to "L with an extra orthogonal vector appended to it." (See Eq. (6).) We show that, if dist(t, L) is small, then these lattices will be similar. On the other hand, if (1) dist(kt, L) is large for all non-zero integers k, and (2) λ_1(L) is not too small, then the two lattices must be quite dissimilar.

We next show that γ-GapCVP^α is as hard as GapSVP. (See Theorem 4.6.) This reduction is a variant of the celebrated reduction of [GMSS99]. It differs from the original in that it "works in base p" instead of in base two, and it "adds an extra coordinate to t." We show that this is sufficient to satisfy the promises required by γ-GapCVP^α. Both reductions are relatively straightforward.

Definition 4.1.
For any γ = γ(n) ≥ α = α(n) > 0, γ-GapCVP^α is the promise problem defined as follows. The input is a lattice L ⊂ Q^n, a target t ∈ Q^n, and a distance d > 0. It is a 'YES' instance if dist(t, L) ≤ d, and a 'NO' instance if dist(kt, L) > γd for all non-zero integers k and d < α·λ_1(L).

We will need the following characterization of the operator norm of a matrix in terms of its behavior over a lattice. Intuitively, this says that "a lattice has a point in every direction."

Fact 4.2.
For any matrix A ∈ R^{n×n} and (full-rank) lattice L ⊂ R^n,

‖A‖ = sup_{y ∈ L\{0}} ‖Ay‖/‖y‖.

Proof.
It suffices to note that, for any x ∈ R^n with ‖x‖ = 1 and any full-rank lattice L ⊂ R^n, there is a sequence y_1, y_2, … of vectors y_i ∈ L such that

lim_{m→∞} y_m/‖y_m‖ = x.

Indeed, this follows immediately from the fact that the rationals are dense in the reals.
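Fact 4.2 can also be checked numerically: enumerating lattice points in a growing box, the ratio ‖Ay‖/‖y‖ approaches the operator norm. A small sketch over the lattice Z² (our own toy example; the matrix and enumeration bound are arbitrary choices):

```python
import numpy as np
from itertools import product

# Fact 4.2: over a full-rank lattice, sup ||Ay|| / ||y|| equals ||A||.
# Here we take L = Z^2 and enumerate coefficient vectors in a box.
A = np.array([[2.0, 1.0],
              [0.0, 0.5]])
op_norm = np.linalg.norm(A, 2)   # largest singular value of A

best = 0.0
for c in product(range(-30, 31), repeat=2):
    if c == (0, 0):
        continue
    y = np.array(c, dtype=float)
    best = max(best, np.linalg.norm(A @ y) / np.linalg.norm(y))

assert best <= op_norm + 1e-9     # the supremum never exceeds ||A||
assert best >= 0.99 * op_norm     # and lattice directions approach it
```

Enlarging the enumeration box drives the gap to zero, mirroring the density argument in the proof.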
Theorem 4.3.
For any γ = γ(n) ≥ 1, there is an efficient reduction from γ′-GapCVP^{1/γ′} to γ-GapLDP, where γ′ = O(γ).

Proof. On input L ⊂ Q^n with basis (b_1, …, b_n), t ∈ Q^n, and d > 0, the reduction behaves as follows. Let L_1 := L(b_1, …, b_n, r·e_{n+1}) with r > 0 to be set below, and L_2 := L(b_1, …, b_n, t + r·e_{n+1}). I.e.,

L_1 = L([B, 0; 0, r]),   L_2 = L([B, t; 0, r]). (6)

(Formally, we must embed the b_i and t in Q^{n+1} under the natural embedding, but we ignore this for simplicity.) The reduction then calls its γ-GapLDP oracle with input L_1, L_2, and a value c > 0 to be set below, and outputs the oracle's answer.

First, suppose that dist(t, L) ≤ d. We note that L_2 does not change if we shift t by a lattice vector. So, we may assume without loss of generality that 0 is a closest lattice vector to t, and therefore ‖t‖ ≤ d. Let B_1 := [b_1, …, b_n, r·e_{n+1}] and B_2 := [b_1, …, b_n, t + r·e_{n+1}] be the bases from the reduction. It suffices to show that κ(B_2B_1^{−1}) is small. Indeed, for any y ∈ L_1, we can write y = (y′, kr) for some k ∈ Z and y′ ∈ L. Then, we have

‖B_2B_1^{−1}y‖ = ‖(y′ + kt, kr)‖ ≤ ‖(y′, kr)‖ + |k|·‖t‖ ≤ (1 + d/r)·‖y‖.

Similarly, ‖B_2B_1^{−1}y‖ ≥ ‖y‖ − |k|·‖t‖ ≥ (1 − d/r)·‖y‖. Therefore, by Fact 4.2, κ(B_2B_1^{−1}) ≤ (1 + d/r)/(1 − d/r). So, we take c := (1 + d/r)/(1 − d/r), and the oracle will therefore output 'YES'.

Now, suppose dist(zt, L) > γ′d for all non-zero integers z, and λ_1(L) > γ′d. (I.e., we take γ′ = 10γ = O(γ).) Let A be a linear map with AL_1 = L_2. Note that A has determinant ±1, so that κ(A) ≥ ‖Ax‖/‖x‖ for any x ∈ Q^{n+1} \ {0}. We have that A(0, r) = (y′, kr) for some y′ ∈ L + kt and k ∈ Z. If k ≠ 0, then ‖A(0, r)‖ ≥ dist(kt, L) > γ′d. So, κ(A) ≥ ‖A(0, r)‖/r > γ′d/r. If, on the other hand, k = 0, then y′ ∈ L \ {0} and ‖A(0, r)‖ = ‖(y′, 0)‖ ≥ λ_1(L) > γ′d, so that we again have κ(A) ≥ ‖A(0, r)‖/r > γ′d/r. Taking r := 2d gives c = 3 and κ(A) > γ′/2 = 5γ > γ·c, so that the oracle will output 'NO', as needed.

We recall the definition of (the decision version of) γ-GapSVP.

Definition 4.4.
For any γ = γ(n) ≥ 1, γ-GapSVP is the promise problem defined as follows: The input is a lattice L ⊂ Q^n and a distance d > 0. It is a 'YES' instance if λ_1(L) ≤ d and a 'NO' instance if λ_1(L) > γd.

Haviv and Regev (building on work of Ajtai, Micciancio, and Khot [Ajt98, Mic01, Kho05]) proved the following strong hardness result for γ-GapSVP [HR12].

Theorem 4.5 ([HR12, Theorem 1.1]).

1. γ-GapSVP is NP-hard under randomized polynomial-time reductions for any constant γ ≥ 1. I.e., there is no randomized polynomial-time algorithm for γ-GapSVP unless NP ⊆ RP.
2. 2^{log^{1−ε} n}-GapSVP is NP-hard under randomized quasipolynomial-time reductions for any constant ε > 0. I.e., there is no randomized polynomial-time algorithm for 2^{log^{1−ε} n}-GapSVP unless NP ⊆ RTIME(2^{polylog(n)}).
3. n^{c/log log n}-GapSVP is NP-hard under randomized subexponential-time reductions for some universal constant c > 0. I.e., there is no randomized polynomial-time algorithm for n^{c/log log n}-GapSVP unless NP ⊆ RSUBEXP := ∩_{δ>0} RTIME(2^{n^δ}).

In particular, to prove Theorem 1.5, it suffices to reduce γ′-GapSVP to γ-GapCVP^{1/γ} for γ′ = O(γ).

Theorem 4.6.
For any 1 ≤ γ = γ(n) ≤ poly(n), there is an efficient reduction from γ′-GapSVP to γ-GapCVP^{1/γ}, where γ′ = γ·(1 + o(1)).

Proof. Let p be a prime with 10γn ≤ p ≤ 20γn ≤ poly(n). We take γ′ = γ·(1 + o(1)) so that γ = γ′(p − 1)/√((p − 1)² + γ′²).

On input a basis B := [b_1, …, b_n] for a lattice L ⊂ Q^n and d > 0, the reduction behaves as follows. For i = 1, …, n, let L_i := L(b_1, …, p·b_i, …, b_n) be "L with its i-th basis vector multiplied by p." And, for all i and 1 ≤ j < p, let t_{i,j} := j·b_i + r·e_{n+1}, with r := γ′d/(p − 1). For each i, j, the reduction calls its γ-GapCVP^{1/γ} oracle on input L_i, t_{i,j}, and d′ := √(d² + r²). Finally, it outputs 'YES' if the oracle answered 'YES' for any query. Otherwise, it outputs 'NO'.

It is clear that the algorithm is efficient. Note that

dist(j·b_i, L_i) = min{ ‖Σ_{ℓ=1}^n a_ℓ b_ℓ‖ : a_ℓ ∈ Z, a_i ≡ j (mod p) }.

In particular, λ_1(L) = min_{i,j} dist(j·b_i, L_i).

So, suppose λ_1(L) ≤ d. Then, there must be some i, j such that dist(t_{i,j}, L_i)² = dist(j·b_i, L_i)² + r² ≤ d² + r² = d′². So, the oracle answers 'YES' at least once.

Now, suppose λ_1(L) > γ′d. Since L_i ⊆ L, we have λ_1(L_i) ≥ λ_1(L) > γ′d, and therefore γd′ = γ′d < λ_1(L_i), i.e., d′ < λ_1(L_i)/γ, as needed. And, by the above observation, we have dist(j·b_i, L_i) ≥ λ_1(L) > γ′d for all 1 ≤ i ≤ n and 1 ≤ j < p. Furthermore, for any integer 1 ≤ z < p, we have dist(zj·b_i, L_i) = dist((zj mod p)·b_i, L_i) > γ′d, where we have used the fact that p is prime so that zj ≢ 0 (mod p). It follows that dist(z·t_{i,j}, L_i) ≥ dist(zj·b_i, L_i) > γ′d. And, for z ≥ p, it is trivially the case that dist(z·t_{i,j}, L_i) ≥ zr ≥ pr > γ′d. (Negative z behave identically, since dist(−v, L_i) = dist(v, L_i).) In total, for all non-zero integers z,

dist(z·t_{i,j}, L_i) > γ′d = γ′d′·(p − 1)/√((p − 1)² + γ′²) = γd′.

So, the oracle will always answer 'NO'.
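The identity λ_1(L) = min_{i,j} dist(j·b_i, L_i) at the heart of this reduction can be checked by brute force on a toy lattice (our own illustration; the example basis, the prime p = 5, and the enumeration bound are arbitrary choices):

```python
import numpy as np
from itertools import product

p = 5
B = np.array([[2.0, 1.0],
              [0.0, 3.0]])   # columns are the basis vectors
n = 2

def dist_to_lattice(C, target, bound=15):
    # min ||C a - target|| over integer coefficient vectors a in a box
    return min(np.linalg.norm(C @ np.array(a) - target)
               for a in product(range(-bound, bound + 1), repeat=n))

# lambda_1(L) by direct enumeration
lam1 = min(np.linalg.norm(B @ np.array(a))
           for a in product(range(-15, 16), repeat=n) if a != (0, 0))

# min over i and 1 <= j < p of dist(j * b_i, L_i), where L_i is
# "L with its i-th basis vector multiplied by p"
best = float('inf')
for i in range(n):
    Bi = B.copy()
    Bi[:, i] *= p
    for j in range(1, p):
        best = min(best, dist_to_lattice(Bi, j * B[:, i]))

assert np.isclose(best, lam1)
```

Intuitively, a shortest vector of L cannot have all coefficients divisible by p (else dividing by p would yield a shorter lattice vector), so it survives as a closest-vector distance in some L_i.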
Corollary 4.7.
For any 1 ≤ γ = γ(n) ≤ poly(n), there is an efficient reduction from γ′-GapSVP to γ-GapLDP, where γ′ = O(γ).

Proof. Combine Theorems 4.3 and 4.6.

With this, the proof of our main hardness result is immediate.
Theorem 4.8.
The three hardness results in Theorem 4.5 hold with GapLDP in place of GapSVP.

Proof.
Combine Theorem 4.5 with Corollary 4.7.
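The lower-bound arguments that follow rest on standard singular-value facts: |det(T)| is the product of the singular values of T, ‖T‖ is the largest of them (hence ‖T‖ ≥ |det(T)|^{1/n}), and κ(T) = σ_max/σ_min. A quick numerical sanity check of these facts (our own illustration, not from the paper):

```python
import numpy as np

# Singular-value facts:
#   |det(T)| = prod_i sigma_i,  ||T|| = sigma_max,  kappa(T) = sigma_max / sigma_min,
# which together give ||T|| >= |det(T)|^{1/n}.
rng = np.random.default_rng(1)
T = rng.standard_normal((5, 5))
s = np.linalg.svd(T, compute_uv=False)   # singular values, descending

assert np.isclose(np.prod(s), abs(np.linalg.det(T)))
assert np.isclose(s[0], np.linalg.norm(T, 2))
assert np.isclose(s[0] / s[-1], np.linalg.cond(T))
assert s[0] >= abs(np.linalg.det(T)) ** (1 / 5) - 1e-9
```

The last inequality is the one used below: a map of determinant ±1 that shrinks volume by a factor v^n must stretch some direction by at least v.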
We now show that, for every n, there exists a lattice L such that D(L, Z^n) ≥ Ω(√n)·M(L, Z^n)·M(Z^n, L). Indeed, it suffices to take any lattice with det(L)^{1/n} ≤ O(n^{−1/2}) but λ_i(L) = Θ(1). (This is true for almost all lattices in a certain precise sense. See, e.g., [Sie45].)

Lemma 5.1.
For any n ≥ 1, there is a lattice L ⊂ Q^n such that det(L)^{1/n} ≤ O(n^{−1/2}) and λ_i(L) = Θ(1) for all i.

Proposition 5.2.
For any n ≥ 1, there exists a lattice L ⊂ Q^n such that D(L, Z^n) ≥ Ω(√n)·M(L, Z^n)·M(Z^n, L).

Proof. Let L ⊂ Q^n be any lattice as in Lemma 5.1. In particular, M(L, Z^n)·M(Z^n, L) = O(1). However, for any linear map T with T(L) = Z^n, we of course have

‖T‖ ≥ |det(T)|^{1/n} = det(Z^n)^{1/n}/det(L)^{1/n} ≥ Ω(√n).

(To see the first inequality, it suffices to recall that |det(T)| = Π σ_i and ‖T‖ = max σ_i, where the σ_i are the singular values of T.) And, T^{−1}e_1 must be a non-zero lattice vector, so ‖T^{−1}‖ ≥ ‖T^{−1}e_1‖ ≥ λ_1(L) ≥ Ω(1). Therefore, κ(T) = ‖T‖·‖T^{−1}‖ ≥ Ω(√n), as needed.

We now show an example demonstrating that mappings between lattices built using HKZ bases can be far from optimal in terms of their distortion. Let B_n be the n × n upper-triangular matrix with diagonal entries equal to 1 and off-diagonal upper-triangular entries equal to −1/2. I.e., B_n has entries

b_{ij} = 0 if j < i,  1 if j = i,  −1/2 if j > i.

Luk and Tracy [LT08] introduced the family {B_n} as an example of bases that are well-reduced but poorly conditioned. Indeed, it is not hard to show that the B_n are HKZ bases that nevertheless have κ(B_n) = Ω(1.5^n). We use these bases to show the necessity of using Seysen reduction even on HKZ bases.

Theorem 5.3. For every n ≥ 1, there exists an n × n HKZ basis B such that D(Z^n, L(B)) ≤ n^{O(log n)}, but κ(B) ≥ Ω(1.5^n).

Proof. Let B′ = B_n be an HKZ basis in the family described above, and take I_n as the basis of Z^n. Then κ(B′·I_n^{−1}) = κ(B′) = Ω(1.5^n).

On the other hand, let B = Seysen(B′). Then, because η(B′) = 1, S(B) ≤ n^{O(log n)} by Theorem 2.16. Clearly, λ_i(Z^n) = 1 for all i ∈ [n]. On the other hand, 1 ≤ λ_i(L(B)) ≤ √n for all i ∈ [n]. The lower bound holds because min_i ‖b̃_i‖ = 1, and the upper bound comes from the fact that ‖b′_i‖ ≤ √n for all i ∈ [n] and the linear independence of the b′_i. (In fact, λ_n(L(B)) = O(1).) It follows that M(Z^n, L(B)) ≤ √n and M(L(B), Z^n) ≤ 1. Applying Lemma 3.2 twice, we then get that κ(B·I_n^{−1}) ≤ n^{O(log n)}.

References

[AFK02] Sanjeev Arora, Alan Frieze, and Haim Kaplan. A new rounding procedure for the assignment problem with applications to dense graph arrangement problems.
Mathematical Programming, 2002.

[Ajt96] Miklós Ajtai. Generating hard instances of lattice problems. In STOC, 1996.

[Ajt98] Miklós Ajtai. The Shortest Vector Problem in L2 is NP-hard for randomized reductions. In STOC, 1998.

[AKS01] Miklós Ajtai, Ravi Kumar, and D. Sivakumar. A sieve algorithm for the Shortest Lattice Vector Problem. In STOC, pages 601–610, 2001.

[Bab16] L. Babai. Graph Isomorphism in quasipolynomial time, 2016. http://arxiv.org/abs/1512.03547.

[Ban93] W. Banaszczyk. New bounds in some transference theorems in the geometry of numbers. Mathematische Annalen, 296(1):625–635, 1993.

[BV14] Zvika Brakerski and Vinod Vaikuntanathan. Lattice-based FHE as secure as PKE. In ITCS, 2014.

[CS98] J. Conway and N.J.A. Sloane. Sphere Packings, Lattices and Groups. Springer New York, 1998.

[Gen09] Craig Gentry. Fully homomorphic encryption using ideal lattices. In STOC, 2009.

[GMSS99] Oded Goldreich, Daniele Micciancio, Shmuel Safra, and Jean-Paul Seifert. Approximating shortest lattice vectors is not harder than approximating closest lattice vectors. Information Processing Letters, 71(2):55–61, 1999.

[GN08] Nicolas Gama and Phong Q. Nguyen. Finding short lattice vectors within Mordell's inequality. In STOC, 2008.

[GPV08] Craig Gentry, Chris Peikert, and Vinod Vaikuntanathan. Trapdoors for hard lattices and new cryptographic constructions. In STOC, 2008.

[HR12] Ishay Haviv and Oded Regev. Tensor-based hardness of the Shortest Vector Problem to within almost polynomial factors. Theory of Computing, 8(23):513–531, 2012. Preliminary version in STOC'07.

[HR14] Ishay Haviv and Oded Regev. On the Lattice Isomorphism Problem. In SODA, 2014.

[Kan87] Ravi Kannan. Minkowski's convex body theorem and Integer Programming. Mathematics of Operations Research, 12(3):415–440, 1987.

[Kho05] Subhash Khot. Hardness of approximating the Shortest Vector Problem in lattices. Journal of the ACM, 52(5):789–808, September 2005. Preliminary version in FOCS'04.

[KZ73] A. Korkine and G. Zolotareff. Sur les formes quadratiques. Mathematische Annalen, 6(3):366–389, 1873.

[LLL82] A.K. Lenstra, H.W. Lenstra Jr., and L. Lovász. Factoring polynomials with rational coefficients. Mathematische Annalen, 261(4):515–534, 1982.

[LLS90] J. C. Lagarias, Hendrik W. Lenstra Jr., and Claus-Peter Schnorr. Korkin-Zolotarev bases and successive minima of a lattice and its reciprocal lattice. Combinatorica, 10(4):333–348, 1990.

[LS14] Hendrik W. Lenstra Jr. and Alice Silverberg. Lattices with symmetry, 2014. http://arxiv.org/abs/1501.00178.

[LT08] Franklin T. Luk and Daniel M. Tracy. An improved LLL algorithm. Linear Algebra and its Applications, 428(2):441–452, 2008.

[Mic01] Daniele Micciancio. The Shortest Vector Problem is NP-hard to approximate to within some constant. SIAM Journal on Computing, 30(6):2008–2035, March 2001. Preliminary version in FOCS 1998.

[Min10] H. Minkowski. Geometrie der Zahlen. B.G. Teubner, 1910.

[MR07] Daniele Micciancio and Oded Regev. Worst-case to average-case reductions based on Gaussian measures. SIAM Journal on Computing, 37(1):267–302, 2007.

[MV13] Daniele Micciancio and Panagiotis Voulgaris. A deterministic single exponential time algorithm for most lattice problems based on Voronoi cell computations. SIAM J. Comput., 42(3):1364–1391, 2013.

[PS97] W. Plesken and B. Souvignier. Computing isometries of lattices. J. Symbolic Comput., 24(3-4):327–334, 1997. Computational algebra and number theory (London, 1993).

[Reg09] Oded Regev. On lattices, Learning with Errors, random linear codes, and cryptography. Journal of the ACM, 56(6):Art. 34, 40, 2009.

[Sch87] C.P. Schnorr. A hierarchy of polynomial time lattice basis reduction algorithms. Theoretical Computer Science, 1987.

[Sey93] Martin Seysen. Simultaneous reduction of a lattice basis and its reciprocal basis. Combinatorica, 13(3):363–376, 1993.

[Sie45] Carl Ludwig Siegel. A mean value theorem in geometry of numbers. Annals of Mathematics, 46(2):340–347, 1945.

[SSV09] Mathieu Dutour Sikiric, Achill Schürmann, and Frank Vallentin. Complexity and algorithms for computing Voronoi cells of lattices.