Benjamin Fuller
Massachusetts Institute of Technology
Publications
Featured research published by Benjamin Fuller.
International Cryptology Conference | 2013
Benjamin Fuller; Xianrui Meng; Leonid Reyzin
Fuzzy extractors derive strong keys from noisy sources. Their security is defined information-theoretically, which limits the length of the derived key, sometimes making it too short to be useful. We ask whether it is possible to obtain longer keys by considering computational security, and show the following.
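As background for the Gen/Rep interface that fuzzy extractors expose, here is a toy, information-theoretic code-offset sketch in Python. The repetition code, the parameter REP, and the SHA-256 extraction step are illustrative choices for this sketch only; this is not the paper's computational construction.

    import hashlib
    import secrets

    REP = 5  # repetition factor: each key bit is spread across REP source bits

    def _encode(bits):
        # repetition code: repeat each bit REP times
        return [b for b in bits for _ in range(REP)]

    def _decode(bits):
        # majority vote within each REP-bit block corrects a few flips
        return [int(sum(bits[i:i + REP]) > REP // 2)
                for i in range(0, len(bits), REP)]

    def gen(w):
        # enrollment: from one noisy reading w (a list of bits), output (key, helper)
        k = [secrets.randbelow(2) for _ in range(len(w) // REP)]
        helper = [wi ^ ci for wi, ci in zip(w, _encode(k))]  # w XOR Encode(k)
        return hashlib.sha256(bytes(k)).digest(), helper

    def rep(w2, helper):
        # reproduction: recover the same key from a close reading w2 and the helper
        k = _decode([wi ^ hi for wi, hi in zip(w2, helper)])  # decode w2 XOR helper
        return hashlib.sha256(bytes(k)).digest()

    # a reading with a few flipped bits still yields the same key
    w = [secrets.randbelow(2) for _ in range(100)]
    key, helper = gen(w)
    w2 = list(w); w2[3] ^= 1; w2[50] ^= 1
    assert rep(w2, helper) == key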
International Cryptology Conference | 2016
Ran Canetti; Benjamin Fuller; Omer Paneth; Leonid Reyzin; Adam D. Smith
Fuzzy extractors (Dodis et al., Eurocrypt 2004) convert repeated noisy readings of a secret into the same uniformly distributed key. To eliminate noise, they require an initial enrollment phase that takes the first noisy reading of the secret and produces a nonsecret helper string to be used in subsequent readings. Reusable fuzzy extractors (Boyen, CCS 2004) remain secure even when this initial enrollment phase is repeated multiple times with noisy versions of the same secret, producing multiple helper strings (for example, when a single person's biometric is enrolled with multiple unrelated organizations). We construct the first reusable fuzzy extractor that makes no assumptions about how multiple readings of the source are correlated; the only prior construction assumed a very specific, unrealistic class of correlations. The extractor works for binary strings with Hamming noise; it achieves computational security under assumptions on the security of hash functions or in the random oracle model. It is simple and efficient and tolerates near-linear error rates. Our reusable extractor is secure for source distributions of linear min-entropy rate. The construction is also secure for sources with much lower entropy rates (lower than those supported by prior nonreusable constructions), assuming that the distribution has some additional structure, namely, that random subsequences of the source have sufficient min-entropy. We show that such structural assumptions are necessary to support low entropy rates. We then explore further how different structural properties of a noisy source can be used to construct fuzzy extractors when the error rates are high, building a computationally secure and an information-theoretically secure construction for large-alphabet sources.
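The structural assumption above, that random subsequences of the source retain min-entropy, suggests the following rough "sample-then-lock" sketch in the random-oracle style: the key is locked many times, each time under a hash of a random subsequence, so a later reading that agrees with the enrollment reading on any one sampled subsequence recovers the key. The parameters SUBSET, LOCKS, and PAD are illustrative, not the paper's.

    import hashlib
    import secrets

    SUBSET = 16  # source positions sampled per lock (illustrative)
    LOCKS = 64   # number of locks in the helper string (illustrative)
    PAD = 16     # zero bytes that signal a successful unlock

    def _pad_stream(seed, bits):
        # random-oracle-style 32-byte pad from a seed and a bit subsequence
        return hashlib.sha256(seed + bytes(bits)).digest()

    def gen(w):
        # enrollment: lock a fresh 16-byte key under many random subsequences of w
        key = secrets.token_bytes(16)
        rng = secrets.SystemRandom()
        helper = []
        for _ in range(LOCKS):
            seed = secrets.token_bytes(16)
            positions = sorted(rng.sample(range(len(w)), SUBSET))
            pad = _pad_stream(seed, [w[i] for i in positions])
            lock = bytes(a ^ b for a, b in zip(pad, key + b"\x00" * PAD))
            helper.append((seed, positions, lock))
        return key, helper

    def rep(w2, helper):
        # reproduction: succeeds if w2 matches w on any one sampled subsequence
        for seed, positions, lock in helper:
            pad = _pad_stream(seed, [w2[i] for i in positions])
            pt = bytes(a ^ b for a, b in zip(pad, lock))
            if pt[16:] == b"\x00" * PAD:  # padding check: this lock opened
                return pt[:16]
        return None  # every sampled subsequence was hit by noise

    w = [secrets.randbelow(2) for _ in range(512)]
    key, helper = gen(w)
    w2 = list(w)
    for i in (3, 97, 200, 451):  # a few noisy positions
        w2[i] ^= 1
    assert rep(w2, helper) == key  # succeeds with high probability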
IEEE Symposium on Security and Privacy | 2017
Benjamin Fuller; Mayank Varia; Arkady Yerukhimovich; Emily Shen; Ariel Hamlin; Vijay Gadepally; Richard Shay; John Darby Mitchell; Robert K. Cunningham
Protected database search systems cryptographically isolate the roles of reading from, writing to, and administering the database. This separation limits unnecessary administrator access and protects data in the case of system breaches. Since protected search was introduced in 2000, the area has grown rapidly; systems are offered by academia, start-ups, and established companies. However, there is no best protected search system or set of techniques. Design of such systems is a balancing act between security, functionality, performance, and usability. This challenge is made more difficult by ongoing database specialization, as some users will want the functionality of SQL, NoSQL, or NewSQL databases. This database evolution will continue, and the protected search community should be able to quickly provide functionality consistent with newly invented databases. At the same time, the community must accurately and clearly characterize the tradeoffs between different approaches. To address these challenges, we provide the following contributions:
1) An identification of the important primitive operations across database paradigms. We find there are a small number of base operations that can be used and combined to support a large number of database paradigms.
2) An evaluation of the current state of protected search systems in implementing these base operations. This evaluation describes the main approaches and tradeoffs for each base operation. Furthermore, it puts protected search in the context of unprotected search, identifying key gaps in functionality.
3) An analysis of attacks against protected search for different base queries.
4) A roadmap and tools for transforming a protected search system into a protected database, including an open-source performance evaluation platform and initial user opinions of protected search.
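As a concrete illustration of one base operation, equality search, here is a textbook encrypted-index sketch using deterministic HMAC tokens; it is a generic construction for illustration, not one of the systems surveyed in the paper. Deterministic tokens leak query repetition to the server, which is exactly the kind of tradeoff the attack analysis above concerns.

    import hashlib
    import hmac
    from collections import defaultdict

    def token(key, keyword):
        # deterministic search token for a keyword
        return hmac.new(key, keyword.encode(), hashlib.sha256).digest()

    def build_index(key, docs):
        # docs maps doc_id -> list of keywords; the server stores only tokens
        index = defaultdict(list)
        for doc_id, keywords in docs.items():
            for kw in keywords:
                index[token(key, kw)].append(doc_id)
        return dict(index)

    def search(index, key, keyword):
        # the client sends token(key, keyword); the server looks it up blindly
        return index.get(token(key, keyword), [])

    index = build_index(b"k" * 32, {"doc1": ["alpha", "beta"], "doc2": ["beta"]})
    assert search(index, b"k" * 32, "beta") == ["doc1", "doc2"]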
Network Computing and Applications | 2010
Joseph A. Cooley; Roger I. Khazan; Benjamin Fuller; Galen Pickard
We have designed and implemented a general-purpose cryptographic building block, called GROK, for securing communication among groups of entities in networks composed of high-latency, low-bandwidth, intermittently connected links. During the process, we solved a number of non-trivial system problems. This paper describes these problems and our solutions, and motivates and justifies these solutions from three viewpoints: usability, efficiency, and security. The solutions described in this paper have been tempered by securing a widely-used group-oriented application, group text chat. We implemented a prototype extension to a popular text chat client called Pidgin and evaluated it in a real-world scenario. Based on our experiences, these solutions are useful to designers of group-oriented systems specifically, and secure systems in general.
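GROK's internals are not described in this abstract; as a generic illustration of the kind of primitive such a building block exposes, the following sketch authenticates and encrypts a chat message under a pre-shared group key with AES-GCM. This is a textbook pattern, not GROK's actual API or design.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    group_key = AESGCM.generate_key(bit_length=128)  # distributed out of band

    def seal(key, sender, plaintext):
        # authenticated encryption; the sender name is bound as associated data
        nonce = os.urandom(12)
        return nonce, AESGCM(key).encrypt(nonce, plaintext, sender.encode())

    def open_message(key, sender, nonce, ciphertext):
        # raises InvalidTag if the ciphertext or sender binding was tampered with
        return AESGCM(key).decrypt(nonce, ciphertext, sender.encode())

    nonce, ct = seal(group_key, "alice", b"hello, group")
    assert open_message(group_key, "alice", nonce, ct) == b"hello, group"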
Hardware-Oriented Security and Trust | 2014
Merrielle Spain; Benjamin Fuller; Kyle Ingols; Robert K. Cunningham
Weak physical unclonable functions (PUFs) can instantiate read-proof hardware tokens (Tuyls et al. 2006, CHES) where benign variation, such as changing temperature, yields a consistent key, but invasive attempts to learn the key destroy it. Previous approaches evaluate security by measuring how much an invasive attack changes the derived key (Pappu et al. 2002, Science). If some attack insufficiently changes the derived key, an expert must redesign the hardware. An unexplored alternative uses software to enhance token response to known physical attacks. Our approach draws on machine learning. We propose a variant of linear discriminant analysis (LDA), called PUF LDA, which reduces noise levels in PUF instances while enhancing changes from known attacks. We compare PUF LDA with standard techniques using an optical coating PUF and the following feature types: raw pixels, fast Fourier transform, short-time Fourier transform, and wavelets. We measure the true positive rate for valid detection at a 0% false positive rate (no mistakes on samples taken after an attack). PUF LDA improves the true positive rate from 50% on average (with a large variance across PUFs) to near 100%. While a well-designed physical process is irreplaceable, PUF LDA enables system designers to improve the PUF reliability-security tradeoff by incorporating attacks without redesigning the hardware token.
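A rough sketch of the evaluation loop follows, using scikit-learn's standard LDA as a stand-in for the paper's PUF LDA variant, and synthetic feature vectors in place of real optical-PUF measurements; the data shapes and separation are illustrative only.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def tpr_at_zero_fpr(valid_scores, attacked_scores):
        # strictest threshold that rejects every post-attack sample (0% FPR),
        # then report the fraction of valid samples still accepted
        threshold = attacked_scores.max()
        return float(np.mean(valid_scores > threshold))

    # toy stand-in data: rows are feature vectors (e.g., FFT coefficients of
    # optical-PUF images); label 1 = benign variation, 0 = after an attack
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.0, 1.0, (200, 40)),
                   rng.normal(1.5, 1.0, (200, 40))])
    y = np.array([1] * 200 + [0] * 200)

    lda = LinearDiscriminantAnalysis().fit(X[::2], y[::2])  # train on half
    scores = lda.decision_function(X[1::2])                 # score the rest
    y_test = y[1::2]
    print(tpr_at_zero_fpr(scores[y_test == 1], scores[y_test == 0]))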
Visualization for Computer Security | 2008
Tamara Yu; Benjamin Fuller; John H. Bannick; Lee M. Rossey; Robert K. Cunningham
Network testbeds are indispensable for developing and testing information operations (IO) technologies. Lincoln Laboratory has been developing LARIAT to support IO test design, development, and execution with high-fidelity user simulations. As LARIAT becomes more advanced, enabling larger and more realistic and complex tests, effective management software has proven essential. In this paper, we present the Director, a graphical user interface that enables experimenters to quickly define, control, and monitor reliable IO tests on a LARIAT testbed. We describe how the interface simplifies these key elements of testbed operation by providing the experimenter with an appropriate system abstraction, support for basic and advanced usage, scalable performance and visualization in large networks, and interpretable and correct feedback.
International Conference on the Theory and Application of Cryptology and Information Security | 2016
Benjamin Fuller; Leonid Reyzin; Adam D. Smith
Fuzzy extractors (Dodis et al., Eurocrypt 2004) convert repeated noisy readings of a high-entropy secret into the same uniformly distributed key. A minimum condition for the security of the key is the hardness of guessing a value that is similar to the secret, because the fuzzy extractor converts such a guess to the key.
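The "hardness of guessing a value that is similar to the secret" is the fuzzy min-entropy of the source; in the notation common to this line of work, with distance function dis and error tolerance t:

    % Fuzzy min-entropy of source W with tolerance t:
    H^{\mathrm{fuzz}}_{t,\infty}(W) \;=\; -\log \max_{w'} \Pr_{w \leftarrow W}\bigl[\, \mathsf{dis}(w, w') \le t \,\bigr]

A guess w' within distance t of the enrolled value is converted to the key, so the key can be no harder to guess than this quantity.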
Journal of Cryptology | 2015
Benjamin Fuller; Adam O'Neill; Leonid Reyzin
This paper addresses deterministic public-key encryption schemes (DE), which are designed to provide meaningful security when the only source of randomness in the encryption process comes from the message itself. We propose a general construction of DE that unifies prior work and gives novel schemes. Specifically, its instantiations include:
1) The first construction from any trapdoor function that has sufficiently many hardcore bits.
2) The first construction that provides "bounded" multi-message security (assuming lossy trapdoor functions).
The security proofs for these schemes are enabled by three tools that are of broader interest:
1) A weaker and more precise sufficient condition for semantic security on a high-entropy message distribution. Namely, we show that to establish semantic security on a distribution M of messages, it suffices to establish indistinguishability for all conditional distributions M|E, where E is an event of probability at least 1/4. (Prior work required indistinguishability on all distributions of a given entropy.)
2) A result about computational entropy of conditional distributions. Namely, we show that conditioning on an event E of probability p reduces the quality of computational entropy by a factor of p and its quantity by log_2(1/p).
3) A generalization of the leftover hash lemma to correlated distributions.
We also extend our result about computational entropy to the average case, which is useful in reasoning about leakage-resilient cryptography: leaking λ bits of information reduces the quality of computational entropy by a factor of 2^λ and its quantity by λ.
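Restated in HILL-style notation (a paraphrase of the claims above, with quality tracked as distinguisher advantage ε at circuit size s, and s' a slightly smaller size):

    % Conditioning on an event E with Pr[E] = p:
    H^{\mathrm{HILL}}_{\varepsilon,\,s}(W) \ge k
      \;\Longrightarrow\;
    H^{\mathrm{HILL}}_{\varepsilon/p,\,s'}(W \mid E) \ge k - \log_2(1/p)
    % Average case: leaking a \lambda-bit string Z:
    H^{\mathrm{HILL}}_{\varepsilon \cdot 2^{\lambda},\,s'}(W \mid Z) \ge k - \lambda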
International Conference on Information Theoretic Security | 2015
Benjamin Fuller; Ariel Hamlin
Leakage-resilient cryptography designs systems to withstand partial adversary knowledge of secret state. Ideally, leakage-resilient systems withstand current and future attacks, restoring confidence in the security of implemented cryptographic systems. Understanding the relationships between classes of leakage functions is an important step toward this goal.
IEEE Signal Processing Magazine | 2015
Gene Itkis; Venkat Chandar; Benjamin Fuller; Joseph P. Campbell; Robert K. Cunningham
Biometrics were originally developed for identification, such as for criminal investigations. More recently, biometrics have also been utilized for authentication. Most biometric authentication systems today match a user's biometric reading against a stored reference template generated during enrollment. If the reading and the template are sufficiently close, the authentication is considered successful and the user is authorized to access protected resources. This binary matching approach has major inherent vulnerabilities.
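A minimal sketch of the binary matching approach described above; the stored plaintext template is itself the core vulnerability, since anyone who reads it can impersonate the user. The threshold value is illustrative.

    def hamming(a, b):
        # number of positions where two equal-length bit strings disagree
        return sum(x != y for x, y in zip(a, b))

    def authenticate(template, reading, threshold=10):
        # accept iff the fresh reading is close enough to the enrolled template
        return hamming(template, reading) <= threshold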