Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Adam D. Smith is active.

Publication


Featured research published by Adam D. Smith.


SIAM Journal on Computing | 2008

Fuzzy Extractors: How to Generate Strong Keys from Biometrics and Other Noisy Data

Yevgeniy Dodis; Rafail Ostrovsky; Leonid Reyzin; Adam D. Smith

We provide formal definitions and efficient secure techniques for turning biometric information into keys usable for any cryptographic application, and for reliably and securely authenticating biometric data. Our techniques apply not just to biometric information, but to any keying material that, unlike traditional cryptographic keys, is (1) not reproducible precisely and (2) not distributed uniformly. We propose two primitives: a fuzzy extractor extracts nearly uniform randomness R from its biometric input; the extraction is error-tolerant in the sense that R will be the same even if the input changes, as long as it remains reasonably close to the original. Thus, R can be used as a key in any cryptographic application. A secure sketch produces public information about its biometric input w that does not reveal w, and yet allows exact recovery of w given another value that is close to w. Thus, it can be used to reliably reproduce error-prone biometric inputs without incurring the security risk inherent in storing them. In addition to formally introducing our new primitives, we provide nearly optimal constructions of both primitives for various measures of closeness of input data, such as Hamming distance, edit distance, and set difference.
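A minimal way to see the secure-sketch idea in code is the code-offset construction for Hamming distance. The sketch below uses a 5-fold repetition code as the error-correcting code and a seeded SHA-256 hash as a stand-in for a strong extractor; both are illustrative choices for this sketch, not the constructions analyzed in the paper.

```python
# Code-offset secure sketch and fuzzy extractor for Hamming distance.
# Assumptions: 5-fold repetition code; seeded SHA-256 as a heuristic extractor.
import hashlib
import secrets

REP = 5  # each data bit is encoded as 5 repeated code bits

def encode(bits):                       # repetition-code encoder
    return [b for b in bits for _ in range(REP)]

def decode(codeword):                   # majority-vote decoder
    return [int(sum(codeword[i*REP:(i+1)*REP]) > REP // 2)
            for i in range(len(codeword) // REP)]

def sketch(w):
    """SS(w) = w XOR c for a random codeword c; the sketch is public."""
    c = encode([secrets.randbelow(2) for _ in range(len(w) // REP)])
    return [wi ^ ci for wi, ci in zip(w, c)]

def recover(w_prime, s):
    """Recover the original w from a noisy reading w' and the public sketch."""
    c = encode(decode([wi ^ si for wi, si in zip(w_prime, s)]))
    return [si ^ ci for si, ci in zip(s, c)]

def extract(w, seed):
    """Derive a key R from w; the seed is public and reused at reproduction."""
    return hashlib.sha256(seed + bytes(w)).hexdigest()

# Usage: enroll with reading w, later reproduce the same key from a noisy w'.
w = [secrets.randbelow(2) for _ in range(40)]          # 8 data bits, encoded
s, seed = sketch(w), secrets.token_bytes(16)
R = extract(w, seed)
w_noisy = list(w); w_noisy[3] ^= 1; w_noisy[17] ^= 1   # flip two bits
assert extract(recover(w_noisy, s), seed) == R
```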


International Conference on Mobile Systems, Applications, and Services | 2004

Tracking moving devices with the Cricket location system

Adam D. Smith; Hari Balakrishnan; Michel Goraczko; Nissanka Bodhi Priyantha

We study the problem of tracking a moving device under two indoor location architectures: an active mobile architecture and a passive mobile architecture. In the former, the infrastructure has receivers at known locations, which estimate distances to a mobile device based on an active transmission from the device. In the latter, the infrastructure has active beacons that periodically transmit signals to a passively listening mobile device, which in turn estimates distances to the beacons. Because the active mobile architecture receives simultaneous distance estimates at multiple receivers from the mobile device, it is likely to perform better tracking than the passive mobile system, in which the device obtains only one distance estimate at a time and may have moved between successive estimates. However, a passive mobile system scales better with the number of mobile devices and puts users in control of whether their whereabouts are tracked. We answer the following question: How do the two architectures compare in tracking performance? We find that the active mobile architecture performs better at tracking, but that the passive mobile architecture has acceptable performance; moreover, we devise a hybrid approach that preserves the benefits of the passive mobile architecture while simultaneously providing the same performance as an active mobile system, suggesting a viable practical solution to the three goals of scalability, privacy, and tracking agility.
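The core computation in either architecture is turning a handful of beacon distance estimates into a position. The sketch below does this with a standard linearized least-squares solve; the beacon layout and noise model are assumptions for illustration, not the Cricket system's own estimator.

```python
# Position estimation from distance estimates to beacons at known locations,
# via a linearized least-squares solve (illustrative, not Cricket's estimator).
import numpy as np

def locate(beacons, dists):
    """Estimate a 2-D position from distances to beacons at known locations.

    Linearize by subtracting the first range equation from the others:
    2*(bi - b0) . p = |bi|^2 - |b0|^2 - (di^2 - d0^2), then solve by least squares.
    """
    b = np.asarray(beacons, dtype=float)
    d = np.asarray(dists, dtype=float)
    A = 2.0 * (b[1:] - b[0])
    rhs = (np.sum(b[1:] ** 2, axis=1) - np.sum(b[0] ** 2)
           - (d[1:] ** 2 - d[0] ** 2))
    p, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return p

# Usage: four ceiling beacons, noisy range estimates to a device near (2, 3).
beacons = [(0, 0), (10, 0), (0, 10), (10, 10)]
true_pos = np.array([2.0, 3.0])
dists = [np.linalg.norm(true_pos - np.array(bc)) + np.random.normal(0, 0.05)
         for bc in beacons]
print(locate(beacons, dists))   # ~ [2. 3.]
```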


Theory of Cryptography Conference | 2005

Toward privacy in public databases

Shuchi Chawla; Cynthia Dwork; Frank McSherry; Adam D. Smith; Hoeteck Wee

We initiate a theoretical study of the census problem. Informally, in a census individual respondents give private information to a trusted party (the census bureau), who publishes a sanitized version of the data. There are two fundamentally conflicting requirements: privacy for the respondents and utility of the sanitized data. Unlike in the study of secure function evaluation, in which privacy is preserved to the extent possible given a specific functionality goal, in the census problem privacy is paramount; intuitively, things that cannot be learned “safely” should not be learned at all. An important contribution of this work is a definition of privacy (and privacy compromise) for statistical databases, together with a method for describing and comparing the privacy offered by specific sanitization techniques. We obtain several privacy results using two different sanitization techniques, and then show how to combine them via cross training. We also obtain two utility results involving clustering.


SIAM Journal on Computing | 2011

What Can We Learn Privately?

Shiva Prasad Kasiviswanathan; Homin K. Lee; Kobbi Nissim; Sofya Raskhodnikova; Adam D. Smith

Learning problems form an important category of computational tasks that generalizes many of the computations researchers apply to large real-life data sets. We ask: what concept classes can be learned privately, namely, by an algorithm whose output does not depend too heavily on any one input or specific training example? More precisely, we investigate learning algorithms that satisfy differential privacy, a notion that provides strong confidentiality guarantees in contexts where aggregate information is released about a database containing sensitive information about individuals. We present several basic results that demonstrate the general feasibility of private learning and relate several models previously studied separately in the contexts of privacy and standard learning.
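One concrete way to realize private learning over a finite hypothesis class is the exponential mechanism, which selects a hypothesis with probability that decays with its empirical error. The sketch below illustrates this on a toy threshold-learning task; the hypothesis class, epsilon value, and data are illustrative assumptions for this sketch.

```python
# Differentially private hypothesis selection via the exponential mechanism:
# the probability of outputting a hypothesis decays exponentially with its
# empirical error, so no single example changes the output distribution much.
import math
import random

def private_select(hypotheses, data, epsilon):
    """Pick a hypothesis with probability proportional to exp(-eps * errors / 2).

    The score (negative number of misclassified examples) has sensitivity 1,
    so this selection is epsilon-differentially private.
    """
    errors = [sum(h(x) != y for x, y in data) for h in hypotheses]
    weights = [math.exp(-epsilon * e / 2.0) for e in errors]
    return random.choices(hypotheses, weights=weights, k=1)[0]

# Usage: learn a 1-D threshold on points in [0, 100) from labelled examples.
hypotheses = [lambda x, t=t: int(x >= t) for t in range(0, 101)]
data = [(x, int(x >= 37)) for x in random.sample(range(100), 60)]
h = private_select(hypotheses, data, epsilon=1.0)
print(next(x for x in range(101) if h(x)))   # chosen threshold, near 37
```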


Foundations of Computer Science | 2002

Authentication of quantum messages

Howard Barnum; Claude Crépeau; Daniel Gottesman; Adam D. Smith; Alain Tapp

Authentication is a well-studied area of classical cryptography: a sender A and a receiver B sharing a classical secret key want to exchange a classical message with the guarantee that the message has not been modified or replaced by a dishonest party with control of the communication line. In this paper we study the authentication of messages composed of quantum states. We give a formal definition of authentication in the quantum setting. Assuming A and B have access to an insecure quantum channel and share a secret, classical random key, we provide a non-interactive scheme that enables A to both encrypt and authenticate an m qubit message by encoding it into m+s qubits, where the error probability decreases exponentially in the security parameter s. The scheme requires a secret key of size 2m+O(s). To achieve this, we give a highly efficient protocol for testing the purity of shared EPR pairs. It has long been known that learning information about a general quantum state will necessarily disturb it. We refine this result to show that such a disturbance can be done with few side effects, allowing it to circumvent cryptographic protections. Consequently, any scheme to authenticate quantum messages must also encrypt them. In contrast, no such constraint exists classically. This reasoning has two important consequences: It allows us to give a lower bound of 2m key bits for authenticating m qubits, which makes our protocol asymptotically optimal. Moreover, we use it to show that digitally signing quantum states is impossible.


Theory and Application of Cryptographic Techniques | 2005

Secure remote authentication using biometric data

Xavier Boyen; Yevgeniy Dodis; Jonathan Katz; Rafail Ostrovsky; Adam D. Smith

Biometric data offer a potential source of high-entropy, secret information that can be used in cryptographic protocols provided two issues are addressed: (1) biometric data are not uniformly distributed; and (2) they are not exactly reproducible. Recent work, most notably that of Dodis, Reyzin, and Smith, has shown how these obstacles may be overcome by allowing some auxiliary public information to be reliably sent from a server to the human user. Subsequent work of Boyen has shown how to extend these techniques, in the random oracle model, to enable unidirectional authentication from the user to the server without the assumption of a reliable communication channel. We show two efficient techniques enabling the use of biometric data to achieve mutual authentication or authenticated key exchange over a completely insecure (i.e., adversarially controlled) channel. In addition to achieving stronger security guarantees than the work of Boyen, we improve upon his solution in a number of other respects: we tolerate a broader class of errors and, in one case, improve upon the parameters of his solution and give a proof of security in the standard model.
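Once both sides hold the key R reproduced from the biometric, mutual authentication can proceed by a standard MAC-based challenge-response. The sketch below shows only that generic step, assuming R is already shared; it is not the specific protocol constructed in the paper.

```python
# Generic MAC-based mutual authentication over an insecure channel, assuming
# both sides already hold the key R reproduced via a fuzzy extractor.
import hmac
import hashlib
import secrets

def tag(key, *parts):
    return hmac.new(key, b"|".join(parts), hashlib.sha256).digest()

# Both parties derive R from the biometric; here we simply assume it is shared.
R = secrets.token_bytes(32)

# 1. Each side sends a fresh nonce in the clear.
nonce_user, nonce_server = secrets.token_bytes(16), secrets.token_bytes(16)

# 2. Each side MACs both nonces plus a direction label so replies cannot be reflected.
proof_server = tag(R, b"server", nonce_user, nonce_server)
proof_user   = tag(R, b"user",   nonce_user, nonce_server)

# 3. Each side verifies the other's proof with a constant-time comparison.
assert hmac.compare_digest(proof_server, tag(R, b"server", nonce_user, nonce_server))
assert hmac.compare_digest(proof_user,   tag(R, b"user",   nonce_user, nonce_server))

# A session key can then be derived from R and both nonces.
session_key = tag(R, b"session", nonce_user, nonce_server)
```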


International Cryptology Conference | 2006

Robust fuzzy extractors and authenticated key agreement from close secrets

Yevgeniy Dodis; Jonathan Katz; Leonid Reyzin; Adam D. Smith

Consider two parties holding correlated random variables W and W′, respectively, that are within distance t of each other in some metric space. These parties wish to agree on a uniformly distributed secret key R by sending a single message over an insecure channel controlled by an all-powerful adversary. We consider both the keyless case, where the parties share no additional secret information, and the keyed case, where the parties share a long-term secret SK that they can use to generate a sequence of session keys {Rj} using multiple pairs {(Wj, W′j)}. The former has applications to, e.g., biometric authentication, while the latter arises in, e.g., the bounded storage model with errors. Our results improve upon previous work in several respects. First, the best previous solution for the keyless case with no errors (i.e., t=0) requires the min-entropy of W to exceed 2|W|/3; we show a solution when the min-entropy of W exceeds the minimal threshold |W|/2. Second, previous solutions for the keyless case in the presence of errors (i.e., t>0) required random oracles; we give the first constructions (for certain metrics) in the standard model. Third, previous solutions for the keyed case were stateful; we give the first stateless solution.
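To illustrate the single-message flow, the sketch below uses deliberately toy ingredients: closeness means |W - W'| <= t on integers, the sketch is a modular residue, and SHA-256 stands in for both the extractor and the MAC that provides robustness. None of these are the paper's constructions; the point is only the shape of the protocol, in which one message (sketch, seed, tag) lets the receiver recover W and re-derive the same key R.

```python
# Schematic one-message key agreement from close secrets (toy ingredients).
import hashlib
import secrets

T = 5                                    # tolerated distance between W and W'

def H(*parts):
    return hashlib.sha256(b"|".join(str(p).encode() for p in parts)).hexdigest()

def send(W):
    """Sender's single message plus the session key it keeps."""
    s = W % (2 * T + 1)                  # secure sketch for the shift metric
    seed = secrets.token_hex(16)         # public extractor seed
    R = H("key", seed, W)
    tag = H("mac", W, s, seed)           # binds the message to W (robustness)
    return (s, seed, tag), R

def receive(W_prime, msg):
    """Receiver recovers W from its close reading W' and re-derives R."""
    s, seed, tag = msg
    d = (W_prime - s) % (2 * T + 1)
    if d > T:
        d -= 2 * T + 1
    W = W_prime - d                      # unique value = s mod (2T+1) within T of W'
    if H("mac", W, s, seed) != tag:      # reject modified messages
        raise ValueError("message rejected")
    return H("key", seed, W)

W = secrets.randbelow(10**6)
msg, R_sender = send(W)
assert receive(W + 3, msg) == R_sender   # W' within distance T of W
```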


Symposium on the Theory of Computing | 2011

Privacy-preserving statistical estimation with optimal convergence rates

Adam D. Smith

Consider an analyst who wants to release aggregate statistics about a data set containing sensitive information. Using differentially private algorithms guarantees that the released statistics reveal very little about any particular record in the data set. In this paper we study the asymptotic properties of differentially private algorithms for statistical inference. We show that for a large class of statistical estimators T and input distributions P, there is a differentially private estimator A_T with the same asymptotic distribution as T. That is, the random variables A_T(X) and T(X) converge in distribution when X consists of an i.i.d. sample from P of increasing size. This implies that A_T(X) is essentially as good as the original statistic T(X) for statistical inference, for sufficiently large samples. Our technique applies to (almost) any pair (T, P) such that T is asymptotically normal on i.i.d. samples from P---in particular, to parametric maximum likelihood estimators and estimators for logistic and linear regression under standard regularity conditions. A consequence of our techniques is the existence of low-space streaming algorithms whose output converges to the same asymptotic distribution as a given estimator T (for the same class of estimators and input distributions as above).
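A toy instance of the phenomenon studied here is a Laplace-noised mean: the privacy noise shrinks as O(1/n) while the sampling error shrinks as O(1/sqrt(n)), so the private and non-private estimators share the same asymptotic distribution. The clipping range and mechanism below are illustrative assumptions, not the paper's general construction.

```python
# epsilon-DP mean via the Laplace mechanism on values clipped to [lo, hi].
import numpy as np

def private_mean(x, epsilon, lo=0.0, hi=1.0):
    """epsilon-DP mean of clipped values; sensitivity of the mean is (hi-lo)/n."""
    x = np.clip(np.asarray(x, dtype=float), lo, hi)
    scale = (hi - lo) / (epsilon * len(x))       # Laplace noise scale
    return x.mean() + np.random.laplace(0.0, scale)

# Usage: the added noise is O(1/n), sampling error is O(1/sqrt(n)), so for
# large n the private estimate behaves like the empirical mean.
rng = np.random.default_rng(0)
sample = rng.beta(2, 5, size=100_000)
print(sample.mean(), private_mean(sample, epsilon=0.5))
```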


Theory and Application of Cryptographic Techniques | 2001

Efficient and Non-interactive Non-malleable Commitment

Giovanni Di Crescenzo; Jonathan Katz; Rafail Ostrovsky; Adam D. Smith

We present new constructions of non-malleable commitment schemes, in the public parameter model (where a trusted party makes parameters available to all parties), based on the discrete logarithm or RSA assumptions. The main features of our schemes are that they achieve near-optimal communication for arbitrarily large messages and are non-interactive. Previous schemes either required (several rounds of) interaction or focused on achieving non-malleable commitment based on general assumptions and were thus efficient only when committing to a single bit. Although our main constructions are for the case of perfectly-hiding commitment, we also present a communication-efficient, non-interactive commitment scheme (based on general assumptions) that is perfectly binding.
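For background, a standard perfectly-hiding commitment in the discrete-logarithm setting is the Pedersen commitment, sketched below with toy parameters. It is a common building block in this area, not the non-malleable scheme of the paper, and real deployments use much larger groups.

```python
# Pedersen commitment sketch: Commit(m; r) = g^m * h^r mod p.
# Perfectly hiding; computationally binding under the discrete-log assumption.
# The tiny group parameters below are toy values so the example runs instantly.
import secrets

p, q = 2039, 1019          # p = 2q + 1; real deployments use 2048-bit+ groups
g, h = 4, 9                # order-q generators; h's dlog base g must be unknown

def commit(m, r=None):
    r = secrets.randbelow(q) if r is None else r
    return pow(g, m % q, p) * pow(h, r, p) % p, r

def verify(c, m, r):
    return c == pow(g, m % q, p) * pow(h, r, p) % p

# Usage: commit now, open (reveal m and r) later.
c, r = commit(123)
assert verify(c, 123, r)
assert not verify(c, 124, r)
```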


IEEE Transactions on Information Theory | 2011

Leftover Hashing Against Quantum Side Information

Marco Tomamichel; Christian Schaffner; Adam D. Smith; Renato Renner

The Leftover Hash Lemma states that the output of a two-universal hash function applied to an input with sufficiently high entropy is almost uniformly random. In its standard formulation, the lemma refers to a notion of randomness that is (usually implicitly) defined with respect to classical side information. Here, we show a strictly more general version of the Leftover Hash Lemma that remains valid even if side information is represented by the state of a quantum system. Our result applies to almost two-universal families of hash functions. The generalized Leftover Hash Lemma has applications in cryptography, e.g., for key agreement in the presence of an adversary who is not restricted to classical information processing.
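A concrete member of the kind of hash family the lemma applies to is the Carter-Wegman family ((a*x + b) mod p) reduced to m output bits. The sketch below samples such a function; the prime and output length are illustrative toy parameters, and the lemma says the output is close to uniform whenever the input has enough (conditional) min-entropy given the public choice of (a, b).

```python
# Sampling from a two-universal hash family (Carter-Wegman construction).
import secrets

P = (1 << 61) - 1            # a Mersenne prime larger than any input we hash

def sample_hash(out_bits):
    """Draw a random member h_{a,b}(x) = ((a*x + b) mod P) truncated to out_bits."""
    a = 1 + secrets.randbelow(P - 1)
    b = secrets.randbelow(P)
    return lambda x: ((a * x + b) % P) & ((1 << out_bits) - 1)

# Usage: publicly choose the hash, privately apply it to a high-entropy input w.
h = sample_hash(out_bits=128)
w = secrets.randbits(256) % P       # stand-in for a weakly random secret
key = h(w)                          # close to uniform if the min-entropy of w >> 128
```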

Collaboration


Dive into Adam D. Smith's collaborations.

Top Co-Authors

Sofya Raskhodnikova
Pennsylvania State University

Abhradeep Thakurta
Pennsylvania State University

Daniel Gottesman
Perimeter Institute for Theoretical Physics