Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Guy N. Rothblum is active.

Publications


Featured research published by Guy N. Rothblum.


Foundations of Computer Science | 2010

A Multiplicative Weights Mechanism for Privacy-Preserving Data Analysis

Moritz Hardt; Guy N. Rothblum

We consider statistical data analysis in the interactive setting. In this setting a trusted curator maintains a database of sensitive information about individual participants, and releases privacy-preserving answers to queries as they arrive. Our primary contribution is a new differentially private multiplicative weights mechanism for answering a large number of interactive counting (or linear) queries that arrive online and may be adaptively chosen. This is the first mechanism with worst-case accuracy guarantees that can answer large numbers of interactive queries and is efficient (in terms of the runtime's dependence on the data universe size). The error is asymptotically optimal in its dependence on the number of participants, and depends only logarithmically on the number of queries being answered. The running time is nearly linear in the size of the data universe. As a further contribution, when we relax the utility requirement and require accuracy only for databases drawn from a rich class of databases, we obtain exponential improvements in running time. Even in this relaxed setting we continue to guarantee privacy for any input database; only the utility requirement is relaxed. Specifically, we show that when the input database is drawn from a smooth distribution (a distribution that does not place too much weight on any single data item), accuracy remains as above, and the running time becomes poly-logarithmic in the data universe size. The main technical contributions are the application of multiplicative weights techniques to the differential privacy setting, a new privacy analysis for the interactive setting, and a technique for reducing data dimensionality for databases drawn from smooth distributions.
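The core update loop is easy to sketch. Below is a minimal Python toy of a private multiplicative weights mechanism, assuming counting queries given as 0/1 vectors over a small data universe; the noise scale, learning rate, and update threshold are illustrative placeholders, not the calibrated parameters from the paper.

import numpy as np

rng = np.random.default_rng(0)

def private_mw(db_hist, queries, eps_per_round=0.1, threshold=0.05, eta=0.05):
    """Toy private multiplicative weights. db_hist: the true database as a
    normalized histogram over the data universe (numpy array). queries:
    0/1 numpy vectors (counting queries), possibly adaptively chosen."""
    universe = len(db_hist)
    x = np.full(universe, 1.0 / universe)    # public synthetic histogram
    answers = []
    for q in queries:
        true_ans = q @ db_hist
        noisy_ans = true_ans + rng.laplace(scale=1.0 / eps_per_round)
        est = q @ x                          # answer from the synthetic data
        if abs(noisy_ans - est) <= threshold:
            answers.append(est)              # "lazy" round: x already accurate
        else:
            # multiplicative weights update: push x toward agreement with
            # the noisy answer on this query, then renormalize
            sign = 1.0 if noisy_ans > est else -1.0
            x = x * np.exp(eta * sign * q)
            x /= x.sum()
            answers.append(noisy_ans)
    return answers

# toy usage: universe of 8 items, database concentrated on item 3
db = np.zeros(8); db[3] = 1.0
qs = [np.array([1, 0, 1, 1, 0, 0, 0, 0], dtype=float) for _ in range(5)]
print(private_mw(db, qs))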


Theory of Cryptography Conference | 2014

Virtual Black-Box Obfuscation for All Circuits via Generic Graded Encoding

Zvika Brakerski; Guy N. Rothblum

We present a new general-purpose obfuscator for all polynomial-size circuits. The obfuscator uses graded encoding schemes, a generalization of multilinear maps. We prove that the obfuscator exposes no more information than the program's black-box functionality, and achieves virtual black-box security, in the generic graded encoding scheme model. This proof is under the Bounded Speedup Hypothesis (BSH, a plausible worst-case complexity-theoretic assumption related to the Exponential Time Hypothesis), in addition to standard cryptographic assumptions. We also prove that it satisfies the notion of indistinguishability obfuscation without relying on BSH (in the same generic model and under standard cryptographic assumptions).


Theory of Cryptography Conference | 2007

Securely obfuscating re-encryption

Susan Hohenberger; Guy N. Rothblum; Abhi Shelat; Vinod Vaikuntanathan

We present the first positive obfuscation result for a traditional cryptographic functionality. This positive result stands in contrast to well-known negative impossibility results [BGI+01] for general obfuscation and recent negative impossibility and improbability [GK05] results for obfuscation of many cryptographic functionalities. Whereas other positive obfuscation results in the standard model apply to very simple point functions, our obfuscation result applies to the significantly more complicated and widely-used re-encryption functionality. This functionality takes a ciphertext for a message m encrypted under Alice's public key and transforms it into a ciphertext for the same message m under Bob's public key. To overcome impossibility results and to make our results meaningful for cryptographic functionalities, we use a new definition of obfuscation. This new definition incorporates more security-aware provisions.
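For intuition about the functionality being obfuscated, here is a toy ElGamal-based re-encryption in the style of the classic BBS bidirectional scheme; this is not the construction from the paper, and the tiny group parameters are insecure and purely illustrative.

# Toy ElGamal-based re-encryption (BBS-style), for intuition only.
# Tiny, insecure parameters: p = 2q + 1, g generates the order-q subgroup.
p, q, g = 467, 233, 4

a, b = 55, 101                            # Alice's and Bob's secret keys
pk_a, pk_b = pow(g, a, p), pow(g, b, p)

def encrypt(pk, m, r):
    return pow(g, r, p), (m * pow(pk, r, p)) % p    # (g^r, m * pk^r)

def decrypt(sk, ct):
    c1, c2 = ct
    return (c2 * pow(pow(c1, sk, p), -1, p)) % p    # m = c2 / c1^sk

# Re-encryption key from Alice to Bob: rk = a / b mod q. Note that this
# key openly reveals a relation between the two secret keys; hiding what
# the re-encryption program leaks is exactly what the obfuscation result
# is about, and the paper's scheme is built differently to allow that.
rk = (a * pow(b, -1, q)) % q

def reencrypt(ct):
    c1, c2 = ct
    return pow(c1, rk, p), c2     # (g^(ra/b), m * g^(ar)); decrypts under b

m = 42
ct = encrypt(pk_a, m, r=77)
assert decrypt(a, ct) == m
assert decrypt(b, reencrypt(ct)) == m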


Computer and Communications Security | 2011

Practical delegation of computation using multiple servers

Ran Canetti; Ben Riva; Guy N. Rothblum

The current move to cloud computing raises the need for verifiable delegation of computation, where a weak client delegates its computation to a powerful server while maintaining the ability to verify that the result is correct. Although there are prior solutions to this problem, none of them is yet both general and practical for real-world use. We demonstrate a relatively efficient and general solution where the client delegates the computation to several servers, and is guaranteed to determine the correct answer as long as even a single server is honest. We show: (1) a protocol for any efficiently computable function, with logarithmically many rounds, based on any collision-resistant hash family; the protocol is set in terms of Turing machines but can be adapted to other computation models. (2) An adaptation of the protocol for the x86 computation model and a prototype implementation, called Quin, for Windows executables. We describe the architecture of Quin and experiment with several parameters on live clouds. We show that the protocol is practical, can work with today's clouds, and is efficient both for the servers and for the client.
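A minimal sketch of the underlying refereed idea, assuming two servers and a deterministic step function: if the servers' outputs disagree, the client bisects over their claimed intermediate states to the first divergent step and recomputes only that single step locally. In the real protocol the servers commit to their state sequences with a collision-resistant hash; this toy client simply queries claimed states directly.

def step(state):
    """One deterministic step of the delegated computation
    (toy example: x -> x*x + 1 modulo a fixed prime)."""
    return (state * state + 1) % (2**61 - 1)

class Server:
    """A server that runs the computation and answers queries about its
    claimed intermediate states; cheat_at corrupts it from that step on."""
    def __init__(self, x0, n_steps, cheat_at=None):
        self.states = [x0]
        for i in range(n_steps):
            s = step(self.states[-1])
            if cheat_at is not None and i >= cheat_at:
                s ^= 1
            self.states.append(s)
    def state_at(self, i):
        return self.states[i]

def referee(x0, n_steps, s1, s2):
    """Client logic: accept a matching output outright; otherwise bisect to
    a step where the servers' claims diverge (O(log n_steps) queries) and
    recompute that single step locally to identify the cheater."""
    if s1.state_at(n_steps) == s2.state_at(n_steps):
        return s1.state_at(n_steps)
    lo, hi = 0, n_steps          # claims agree at lo (= x0), differ at hi
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if s1.state_at(mid) == s2.state_at(mid):
            lo = mid
        else:
            hi = mid
    honest_next = step(s1.state_at(lo))   # one local step of recomputation
    winner = s2 if s1.state_at(hi) != honest_next else s1
    return winner.state_at(n_steps)

honest = Server(7, 1000)
cheater = Server(7, 1000, cheat_at=431)
assert referee(7, 1000, honest, cheater) == honest.state_at(1000)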


Theory of Cryptography Conference | 2010

Leakage-resilient signatures

Sebastian Faust; Eike Kiltz; Krzysztof Pietrzak; Guy N. Rothblum

The strongest standard security notion for digital signature schemes is unforgeability under chosen message attacks. In practice, however, this notion can be insufficient due to "side-channel attacks" which exploit leakage of information about the secret internal state. In this work we put forward the notion of "leakage-resilient signatures," which strengthens the standard security notion by giving the adversary the additional power to learn a bounded amount of arbitrary information about the secret state that was accessed during every signature generation. This notion naturally implies security against all side-channel attacks as long as the amount of information leaked on each invocation is bounded and "only computation leaks information." The main result of this paper is a construction which gives a (tree-based, stateful) leakage-resilient signature scheme based on any 3-time signature scheme. The amount of information that our scheme can safely leak per signature generation is 1/3 of the information the underlying 3-time signature scheme can leak in total. Signature schemes that remain secure even if a bounded total amount of information is leaked were recently constructed; instantiating our construction with these schemes thus gives the first constructions of provably secure leakage-resilient signature schemes. The above construction assumes that the signing algorithm can sample truly random bits, and thus an implementation would need some special hardware (randomness gates). Simply generating this randomness using a leakage-resilient stream cipher will in general not work. Our second contribution is a sound general principle for replacing uniform random bits in any leakage-resilient construction with pseudorandom ones: run two leakage-resilient stream ciphers (with independent keys) in parallel and then apply a two-source extractor to their outputs.
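The second contribution is simple to sketch. In the toy below, a hash-based keystream stands in for each stream cipher (SHA-256 in counter mode is purely illustrative and is not itself leakage-resilient), and the classic inner-product two-source extractor over GF(2) combines the two independent outputs into nearly uniform bits.

import hashlib

def toy_stream(key: bytes, i: int, n_bytes: int = 16) -> bytes:
    """Stand-in for a leakage-resilient stream cipher: the i-th block of a
    hash-based keystream. A real instantiation must itself be
    leakage-resilient; SHA-256 here is only for illustration."""
    return hashlib.sha256(key + i.to_bytes(8, "big")).digest()[:n_bytes]

def inner_product_extract(x: bytes, y: bytes) -> int:
    """Two-source extractor: inner product of the two bit strings over
    GF(2). Yields a (nearly) unbiased bit when x and y come from
    independent sources with enough min-entropy."""
    acc = 0
    for bx, by in zip(x, y):
        acc ^= bx & by
    return bin(acc).count("1") % 2   # parity of the AND of the two strings

def pseudorandom_bits(key1: bytes, key2: bytes, n_bits: int):
    """Derive signing randomness: run two independent streams in parallel
    and extract one bit per pair of blocks."""
    return [inner_product_extract(toy_stream(key1, i), toy_stream(key2, i))
            for i in range(n_bits)]

print(pseudorandom_bits(b"key-one", b"key-two", 16))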


International Cryptology Conference | 2010

Securing computation against continuous leakage

Shafi Goldwasser; Guy N. Rothblum

We present a general method to compile any cryptographic algorithm into one which resists side-channel attacks of the "only computation leaks information" variety for an unbounded number of executions. Our method uses as a building block a semantically secure subsidiary bit-encryption scheme with the following additional operations: key refreshing, oblivious generation of ciphertexts, leakage-resilience regeneration, and blinded homomorphic evaluation of a single complete gate (e.g. NAND). Furthermore, the security properties of the subsidiary encryption scheme should withstand bounded leakage incurred while performing each of the above operations. We show how to implement such a subsidiary encryption scheme under the DDH intractability assumption and the existence of a simple secure hardware component. The hardware component is independent of the encryption scheme's secret key. The subsidiary encryption scheme resists leakage attacks where the leakage is computable in polynomial time and of length bounded by a constant fraction of the security parameter.
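The flavor of the key-refreshing operation can be sketched with additive secret sharing; this is an illustration of the general idea only, not the paper's DDH-based subsidiary scheme. The key lives in two shares, each invocation leaks a bounded amount about each share, and refreshing re-randomizes the shares so that leakage does not accumulate across executions.

import secrets

Q = (1 << 127) - 1   # a prime modulus (2^127 - 1); toy parameter choice

def share(sk):
    """Split the secret key additively into two shares; all computation is
    done on shares, so no single leaky component ever holds sk."""
    s1 = secrets.randbelow(Q)
    return s1, (sk - s1) % Q

def refresh(s1, s2):
    """Key refreshing: re-randomize the shares between invocations. The
    shared secret s1 + s2 mod Q is unchanged, but leakage gathered on the
    old shares becomes useless against the new ones."""
    r = secrets.randbelow(Q)
    return (s1 + r) % Q, (s2 - r) % Q

sk = 123456789
s1, s2 = share(sk)
s1, s2 = refresh(s1, s2)
assert (s1 + s2) % Q == sk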


Theory of Cryptography Conference | 2010

Obfuscation of hyperplane membership

Ran Canetti; Guy N. Rothblum; Mayank Varia

Previous work on program obfuscation gives strong negative results for general-purpose obfuscators, and positive results for obfuscating simple functions such as equality testing (point functions). In this work, we construct an obfuscator for a more complex algebraic functionality: testing for membership in a hyperplane (of constant dimension). We prove the security of the obfuscator under a new strong variant of the Decisional Diffie-Hellman assumption. Finally, we show a cryptographic application of the new obfuscator to digital signatures.
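The shape of such an obfuscator is easy to sketch: publish g^(a_i) for the hidden normal vector a, and test a candidate point x by raising the published elements to the x_i and checking whether the product, g^(<a,x>), equals the identity. The toy group below is far too small to hide anything; hiding rests on the paper's strong DDH variant in a large group.

# Toy obfuscation of hyperplane membership <a, x> = 0 (mod q).
# Tiny, insecure group: order-q subgroup of Z_p^* with p = 2q + 1.
p, q, g = 467, 233, 4

def obfuscate(a):
    """Publish only the group elements g^(a_i); the normal vector a itself
    stays hidden (up to what the group structure exposes)."""
    return [pow(g, ai, p) for ai in a]

def is_member(obf, x):
    """Compute g^(<a, x>) from the published elements alone and compare
    with the identity: the product equals 1 iff <a, x> = 0 mod q."""
    acc = 1
    for h, xi in zip(obf, x):
        acc = (acc * pow(h, xi, p)) % p
    return acc == 1

a = [3, 5, 226]                  # hidden normal vector, constant dimension
obf = obfuscate(a)
assert is_member(obf, [1, 5, 4])        # 3*1 + 5*5 + 226*4 = 932 = 4*233
assert not is_member(obf, [1, 1, 1])    # 3 + 5 + 226 = 234 = 233 + 1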


Theory of Cryptography Conference | 2009

How Efficient Can Memory Checking Be?

Cynthia Dwork; Moni Naor; Guy N. Rothblum; Vinod Vaikuntanathan

We consider the problem of memory checking, where a user wants to maintain a large database on a remote server but has only limited local storage. The user wants to use the small (but trusted and secret) local storage to detect faults in the large (but public and untrusted) remote storage. A memory checker receives from the user store and retrieve operations to the large database. The checker makes its own requests to the (untrusted) remote storage and receives answers to these requests. It then uses these responses, together with its small private and reliable local memory, to ascertain that all requests were answered correctly, or to report faults in the remote storage (the public memory). A fruitful line of research investigates the complexity of memory checking in terms of the number of queries the checker issues per user request (query complexity) and the size of the reliable local memory (space complexity). Blum et al., who first formalized the question, distinguished between online checkers (that report faults as soon as they occur) and offline checkers (that report faults only at the end of a long sequence of operations). In this work we revisit the question of memory checking, asking how efficient memory checking can be. For online checkers, Blum et al. provided a checker with logarithmic query complexity in n, the database size. Our main result is a lower bound: we show that for checkers that access the remote storage in a deterministic and non-adaptive manner (as do all known memory checkers), the query complexity must be at least Ω(log n / log log n). To cope with this negative result, we show how to trade off the read and write complexity of online memory checkers: for any desired logarithm base d, we construct an online checker where either reading or writing is inexpensive and has query complexity O(log_d n). The price for this is that the other operation (write or read, respectively) has query complexity O(d · log_d n). Finally, if even this performance is unacceptable, offline memory checking may be an inexpensive alternative. We provide a scheme with O(1) amortized query complexity, improving on Blum et al.'s construction, which achieved this only for long sequences of at least n operations.
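The logarithmic-query online checker of Blum et al. can be sketched as a hash tree: the untrusted public memory holds the data and the internal hash nodes, the trusted local memory holds only the root digest, and every read or write authenticates one root-to-leaf path. A minimal Python sketch, assuming SHA-256 is collision resistant; details such as timestamps for replay protection in other variants are omitted.

import hashlib

def H(*parts):
    h = hashlib.sha256()
    for part in parts:
        h.update(part)
    return h.digest()

class MerkleMemoryChecker:
    """Toy online memory checker with O(log n) query complexity. remote is
    the untrusted public memory (node index -> bytes); root is the only
    trusted local state."""

    def __init__(self, n):               # n = number of slots, power of two
        self.n = n
        self.remote = {}
        for i in range(n):
            self.remote[n + i] = b""     # leaves hold the raw values
        for i in range(n - 1, 0, -1):    # internal nodes hold digests
            self.remote[i] = H(self._node_hash(2 * i), self._node_hash(2 * i + 1))
        self.root = self.remote[1]

    def _node_hash(self, i):
        return H(self.remote[i]) if i >= self.n else self.remote[i]

    def _verify_path(self, slot):
        """Recompute the path from leaf `slot` to the root and compare with
        the trusted root; any fault in public memory is detected here."""
        i, acc = self.n + slot, H(self.remote[self.n + slot])
        while i > 1:
            sib = self._node_hash(i ^ 1)
            acc = H(acc, sib) if i % 2 == 0 else H(sib, acc)
            i //= 2
        if acc != self.root:
            raise RuntimeError("fault detected in remote storage")

    def read(self, slot):
        self._verify_path(slot)
        return self.remote[self.n + slot]

    def write(self, slot, value: bytes):
        self._verify_path(slot)          # authenticate before overwriting
        self.remote[self.n + slot] = value
        i = self.n + slot
        while i > 1:
            i //= 2
            self.remote[i] = H(self._node_hash(2 * i), self._node_hash(2 * i + 1))
        self.root = self.remote[1]       # update the trusted local root

mc = MerkleMemoryChecker(8)
mc.write(5, b"hello")
assert mc.read(5) == b"hello"
mc.remote[mc.n + 5] = b"tampered"        # adversary corrupts public memory
try:
    mc.read(5)
except RuntimeError as e:
    print(e)                             # fault detected in remote storage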


Symposium on the Theory of Computing | 2007

Verifying and decoding in constant depth

Shafi Goldwasser; Dan Gutfreund; Alexander Healy; Tali Kaufman; Guy N. Rothblum

We develop a general approach for improving the efficiency of a computationally bounded receiver interacting with a powerful and possibly malicious sender. The key idea we use is that of delegating some of the receiver's computation to the (potentially malicious) sender. This idea was recently introduced by Goldwasser et al. [14] in the area of program checking. A classic example of such a sender-receiver setting is interactive proof systems. By taking the sender to be a (potentially malicious) prover and the receiver to be a verifier, we show that (p-prover) interactive proofs with k rounds of interaction are equivalent to (p-prover) interactive proofs with k+O(1) rounds, where the verifier is in NC^0. That is, each round of the verifier's computation can be implemented in constant parallel time. As a corollary, we obtain interactive proof systems, with (optimally) constant soundness, for languages in AM and NEXP, where the verifier runs in constant parallel time. Another, less immediate sender-receiver setting arises in considering error-correcting codes. By taking the sender to be a (potentially corrupted) codeword and the receiver to be a decoder, we obtain explicit families of codes that are locally (list-)decodable by constant-depth circuits of size polylogarithmic in the length of the codeword. Using the tight connection between locally list-decodable codes and average-case complexity, we obtain a new, more efficient, worst-case to average-case reduction for languages in EXP.


Symposium on the Theory of Computing | 2016

Constant-round interactive proofs for delegating computation

Omer Reingold; Guy N. Rothblum; Ron D. Rothblum

The celebrated IP = PSPACE theorem of Lund et al. (J. ACM 1992) and Shamir (J. ACM 1992) allows an all-powerful but untrusted prover to convince a polynomial-time verifier of the validity of extremely complicated statements (as long as they can be evaluated using polynomial space). The interactive proof system designed for this purpose requires a polynomial number of communication rounds and an exponential-time (polynomial-space complete) prover. In this paper, we study the power of more efficient interactive proof systems. Our main result is that for every statement that can be evaluated in polynomial time and bounded-polynomial space there exists an interactive proof that satisfies the following strict efficiency requirements: (1) the honest prover runs in polynomial time, (2) the verifier is almost linear time (and under some conditions even sublinear), and (3) the interaction consists of only a constant number of communication rounds. Prior to this work, very little was known about the power of efficient, constant-round interactive proofs (rather than arguments). This result represents significant progress on the round complexity of interactive proofs (even if we ignore the running time of the honest prover), and on the expressive power of interactive proofs with a polynomial-time honest prover (even if we ignore the round complexity). This result has several applications, and in particular it can be used for verifiable delegation of computation. Our construction leverages several new notions of interactive proofs, which may be of independent interest. One of these notions is that of unambiguous interactive proofs, where the prover has a unique successful strategy. Another notion is that of probabilistically checkable interactive proofs (PCIPs), where the verifier only reads a few bits of the transcript in checking the proof (this could be viewed as an interactive extension of PCPs).
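As background on how a weak verifier can check a huge sum with little work per round, here is the classic sumcheck protocol (the engine behind IP = PSPACE and many delegation protocols), specialized to a multilinear polynomial so that each prover message is just two field elements. This is standard material, not the paper's new unambiguous-proof or PCIP machinery, and the polynomial f below is a made-up example.

import random

P = 2**61 - 1        # prime field modulus
N_VARS = 3

def f(xs):
    """Example multilinear polynomial (stands in for the computation)."""
    x, y, z = xs
    return (2 * x * y + 3 * y * z + x + 5) % P

def true_sum():
    """Sum of f over the boolean cube {0,1}^N_VARS."""
    total = 0
    for m in range(2 ** N_VARS):
        total = (total + f([(m >> i) & 1 for i in range(N_VARS)])) % P
    return total

def prover_round(fixed):
    """Honest prover: with the first len(fixed) variables fixed, return the
    univariate restriction g(t) = sum over the remaining boolean variables,
    as (g(0), g(1)); enough, since g is linear for multilinear f."""
    rest = N_VARS - len(fixed) - 1
    g = []
    for t in (0, 1):
        s = 0
        for m in range(2 ** rest):
            tail = [(m >> i) & 1 for i in range(rest)]
            s = (s + f(fixed + [t] + tail)) % P
        g.append(s)
    return g

def verify():
    claim = true_sum()    # in the real protocol, the prover sends this claim
    fixed = []
    for _ in range(N_VARS):
        g0, g1 = prover_round(fixed)
        assert (g0 + g1) % P == claim, "prover caught cheating"
        r = random.randrange(P)
        claim = (g0 + r * (g1 - g0)) % P   # g(r) by linear interpolation
        fixed.append(r)
    assert f(fixed) == claim   # final check: one evaluation at a random point
    return True

print(verify())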

Collaboration


Dive into Guy N. Rothblum's collaborations.

Top Co-Authors

Shafi Goldwasser
Weizmann Institute of Science

Moni Naor
Weizmann Institute of Science

Ron D. Rothblum
Massachusetts Institute of Technology

Zvika Brakerski
Weizmann Institute of Science