
Publication


Featured research published by David A. McGrew.


International Conference on Cryptology in India | 2004

The security and performance of the Galois/Counter Mode (GCM) of operation

David A. McGrew; John Viega

The recently introduced Galois/Counter Mode (GCM) of operation for block ciphers provides both encryption and message authentication, using universal hashing based on multiplication in a binary finite field. We analyze its security and performance, and show that it is the most efficient mode of operation for high speed packet networks, by using a realistic model of a network crypto module and empirical data from studies of Internet traffic in conjunction with software experiments and hardware designs. GCM has several useful features: it can accept IVs of arbitrary length, can act as a stand-alone message authentication code (MAC), and can be used as an incremental MAC. We show that GCM is secure in the standard model of concrete security, even when these features are used. We also consider several of its important system-security aspects.
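The universal hashing that GCM relies on reduces to multiplication in the binary field GF(2^128). As a minimal sketch (not a production implementation), the field multiplication can be written directly from the shift-and-reduce description in NIST SP 800-38D:

```python
# Minimal sketch of GCM's GF(2^128) multiplication (the core of GHASH),
# following the shift-and-reduce description in NIST SP 800-38D.
R = 0xE1000000000000000000000000000000  # x^128 + x^7 + x^2 + x + 1 (reflected)

def gf128_mul(x: int, y: int) -> int:
    """Multiply two 128-bit field elements (MSB-first bit convention)."""
    z, v = 0, x
    for i in range(127, -1, -1):       # walk y's bits from the MSB down
        if (y >> i) & 1:
            z ^= v
        # multiply v by the formal variable: shift right, reduce if needed
        v = (v >> 1) ^ R if v & 1 else v >> 1
    return z
```

In this bit-reflected convention the multiplicative identity is the 128-bit string 10...0, i.e. the integer `1 << 127`.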


Fast Software Encryption | 2000

Statistical Analysis of the Alleged RC4 Keystream Generator

Scott R. Fluhrer; David A. McGrew

The alleged RC4 keystream generator is examined, and a method of explicitly computing digraph probabilities is given. Using this method, we demonstrate how to distinguish 8-bit RC4 from randomness. Our method requires less keystream output than currently published attacks, needing only 2^30.6 bytes of output. In addition, we observe that an attacker can, on occasion, determine portions of the internal state with nontrivial probability. However, we are currently unable to extend this observation to a full attack.
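The alleged RC4 generator itself is short enough to sketch, and an empirical digraph distribution like the one the paper analyzes can be tallied from its output. The key below is an arbitrary placeholder:

```python
from collections import Counter

def rc4_keystream(key: bytes, n: int) -> bytes:
    """The alleged RC4 generator: key schedule (KSA) plus output loop (PRGA)."""
    S = list(range(256))
    j = 0
    for i in range(256):                       # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for _ in range(n):                         # pseudo-random generation
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

# Tally empirical digraph (consecutive byte pair) counts for one key;
# the distinguisher compares such counts against the uniform distribution.
ks = rc4_keystream(b"arbitrary example key", 1 << 16)
digraphs = Counter(zip(ks, ks[1:]))
```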


International Conference on Selected Areas in Cryptography | 2007

The security of the extended codebook (XCB) mode of operation

David A. McGrew; Scott R. Fluhrer

The XCB mode of operation was outlined in 2004 as a contribution to the IEEE Security in Storage effort, but no security analysis was provided. In this paper, we provide a proof of security for XCB, and show that it is a secure tweakable (super) pseudorandom permutation. Our analysis makes several new contributions: it uses an algebraic property of XCB's internal universal hash function to simplify the proof, and it defines a nonce mode in which XCB can be securely used even when the plaintext is shorter than twice the width of the underlying block cipher. We also show minor modifications that improve the performance of XCB and make it easier to analyze. XCB is interesting because it is highly efficient in both hardware and software, it has no alignment restrictions on input lengths, it can be used in nonce mode, and it uses the internal functions of the Galois/Counter Mode (GCM) of operation, which facilitates design re-use and admits multi-purpose implementations.


Fast Software Encryption | 2014

Pipelineable On-line Encryption

Farzaneh Abed; Scott R. Fluhrer; Christian Forler; Eik List; Stefan Lucks; David A. McGrew; Jakob Wenzel

Correct authenticated decryption requires the receiver to buffer the decrypted message until the authenticity check has been performed. In high-speed networks, which must handle large message frames at low latency, this behavior becomes practically infeasible. This paper proposes CCA-secure on-line ciphers as a practical alternative to AE schemes since the former provide some defense against malicious message modifications. Unfortunately, all published on-line ciphers so far are either inherently sequential, or lack a CCA-security proof.


Selected Areas in Cryptography | 2000

Attacks on Additive Encryption of Redundant Plaintext and Implications on Internet Security

David A. McGrew; Scott R. Fluhrer

We present and analyze attacks on additive stream ciphers that rely on linear equations that hold with non-trivial probability in plaintexts that are encrypted using distinct keys. These attacks extend Biham's key collision attack and Hellman's time-memory tradeoff attack, and can be applied to any additive stream cipher. We define linear redundancy to characterize the vulnerability of a plaintext source to these attacks. We show that an additive stream cipher with an n-bit key has an effective key size of n - min(l, lg M) against the key collision attack, and of 2n/3 + lg(n/3) + max(n - l, 0) against the time-memory tradeoff attack, when the attacker knows l linear equations over the plaintext and has M ciphertexts encrypted with M distinct unknown secret keys. Lastly, we analyze the IP, TCP, and UDP protocols and some typical protocol constructs, and show that they contain significant linear redundancy. We conclude with observations on the use of stream ciphers for Internet security.
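As an illustration of how plaintext redundancy feeds such attacks: a known fixed header (one linear equation over the plaintext) lets an eavesdropper strip plaintext out of an additively encrypted ciphertext, recovering keystream bytes; across many ciphertexts under distinct keys, matching recovered prefixes then flag colliding keys. The generator and header below are illustrative stand-ins, not a real cipher or the paper's model:

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Stand-in additive keystream generator (not a real cipher)."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def encrypt(key: bytes, pt: bytes) -> bytes:
    # additive encryption: ciphertext = plaintext XOR keystream
    return bytes(a ^ b for a, b in zip(pt, keystream(key, len(pt))))

HEADER = b"\x45\x00"  # predictable leading bytes, e.g. of an IPv4 header

# XOR the known header back out of the ciphertext: the keystream prefix
# is recovered from ciphertext alone, with no knowledge of the key.
ct = encrypt(b"secret key", HEADER + b"payload")
recovered = bytes(a ^ b for a, b in zip(ct, HEADER))
assert recovered == keystream(b"secret key", 2)
```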


Third IEEE International Security in Storage Workshop (SISW'05) | 2005

Efficient authentication of large, dynamic data sets using Galois/counter mode (GCM)

David A. McGrew

The Galois/counter mode (GCM) of operation can be used as an incremental message authentication code (MAC); in this respect, it is unique among the crypto algorithms used in practice. We show that it has this property, and show how to efficiently recompute a MAC after small changes within a message, after the appending or prepending of data to a message, or after the truncation of data from the start or end of a message. Incremental MACs have great utility for protecting data at rest. In particular, they can be used to protect a large, dynamic data set using only a small, constant amount of memory.
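The incremental property comes from GHASH being a polynomial evaluated at a secret point: editing one message block shifts the tag by a single scaled delta. A hedged sketch of that idea, using a polynomial hash over a prime field as a stand-in for GCM's GF(2^128) (modulus and key are arbitrary illustrative choices):

```python
# Toy polynomial hash standing in for GHASH over GF(2^128).
P = 2**61 - 1   # prime modulus (illustrative)
H = 123456789   # the secret evaluation point (illustrative)

def poly_hash(blocks):
    acc = 0
    for b in blocks:               # Horner: computes sum of b_i * H^(n-i)
        acc = (acc + b) * H % P
    return acc

msg = [10, 20, 30, 40]
tag = poly_hash(msg)

# Edit block j in place: the tag moves by (new - old) * H^(n-j), i.e.
# one modular multiplication instead of rehashing the whole message.
j, new = 1, 99
delta = (new - msg[j]) * pow(H, len(msg) - j, P) % P
tag = (tag + delta) % P
msg[j] = new
assert tag == poly_hash(msg)
```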


Information Processing Letters | 2005

Minimizing center key storage in hybrid one-way function based group key management with communication constraints

Mingyan Li; Radha Poovendran; David A. McGrew

We study the problem of designing a storage-efficient secure multicast key management scheme based on one-way function trees (OFT) for a prespecified key update communication overhead. Canetti, Malkin and Nissim presented a hybrid model that divides a group of N members into clusters of M members and assigns each cluster to one leaf node of a key tree. Using the model, we formulate a constrained optimization problem to minimize the center storage in terms of the cluster size M. Due to the monotonicity of the center storage with respect to M, we convert the constrained optimization into a fixed point equation and derive the optimal M* explicitly. We show that the asymptotic value of the optimal M*, given as µ + ((a - 1)/log_e a) log_e µ with µ = O(log N) and a being the degree of a key tree, leads to the minimal storage of O(N/log N), when the update communication constraint is given as O(log N). We present an explicit design algorithm that achieves minimal center storage for a given update communication constraint.


International Conference on Research in Security Standardisation | 2016

State Management for Hash-Based Signatures

David A. McGrew; Panos Kampanakis; Scott R. Fluhrer; Stefan-Lukas Gazdag; Denis Butin; Johannes A. Buchmann

The unavoidable transition to post-quantum cryptography requires dependable quantum-safe digital signature schemes. Hash-based signatures are well-understood and promising candidates, and the object of current standardization efforts. In the scope of this standardization process, the most commonly raised concern is statefulness, due to the use of one-time signature schemes. While the theory of hash-based signatures is mature, a discussion of the system security issues arising from the concrete management of their state has been lacking. In this paper, we analyze state management in N-time hash-based signature schemes, considering both security and performance, and categorize the security issues that can occur due to state synchronization failures. We describe a state reservation approach backed by nonvolatile storage, and show that it can be naturally realized in a hierarchical signature scheme. To protect against unintentional copying of the private key state, we consider a hybrid stateless/stateful scheme, which provides a graceful security degradation in the face of unintentional copying, at the cost of increased signature size. Compared to a completely stateless scheme, the hybrid approach realizes the essential benefits, with smaller signatures and faster signing.
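The state-reservation idea can be sketched in a few lines: persist a reserved ceiling of one-time-key indices to nonvolatile storage before signing, so that a crash may waste some reserved indices but can never reuse one. The class name and file format below are illustrative assumptions, not the paper's concrete design:

```python
import json
import os
import tempfile

class StateReservation:
    """Hand out one-time-signature indices, persisting reservations in batches."""

    def __init__(self, path: str, batch: int = 16):
        self.path, self.batch = path, batch
        self.next_index = 0
        self.reserved_until = 0
        if os.path.exists(path):
            with open(path) as f:
                # after a crash, resume past everything ever reserved,
                # skipping (but never reusing) unspent indices
                self.next_index = self.reserved_until = json.load(f)["reserved"]

    def _persist(self) -> None:
        # write-then-rename so the stored ceiling is updated atomically
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(self.path) or ".")
        with os.fdopen(fd, "w") as f:
            json.dump({"reserved": self.reserved_until}, f)
        os.replace(tmp, self.path)

    def take_index(self) -> int:
        if self.next_index >= self.reserved_until:
            self.reserved_until = self.next_index + self.batch
            self._persist()          # one nonvolatile write per batch, not per signature
        i = self.next_index
        self.next_index += 1
        return i

# demo: reserve in batches of 4, "crash", and resume without index reuse
state_dir = tempfile.mkdtemp()
path = os.path.join(state_dir, "hbs_state.json")
signer = StateReservation(path, batch=4)
used = [signer.take_index() for _ in range(3)]      # 0, 1, 2
restarted = StateReservation(path, batch=4)         # simulated crash/restart
assert restarted.take_index() == 4                  # index 3 is skipped, never reused
```

Batching trades a few wasted indices on restart for far fewer nonvolatile writes on the signing path.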


IEEE Journal on Selected Areas in Communications | 2006

A High-Speed Hardware Architecture for Universal Message Authentication Code

Bo Yang; Ramesh Karri; David A. McGrew

We present an architecture level optimization technique called divide-and-concatenate based on two observations: 1) the area of an array multiplier and its associated data path decreases quadratically and their delay decreases linearly as their operand size is reduced and 2) in universal hash functions and their associated message authentication codes, two one-way hash functions are equivalent if they have the same collision probability property. In the proposed approach, we divide a 2w-bit data path (with collision probability 2^-2w) into two w-bit data paths (each with collision probability 2^-w) and concatenate their results to construct an equivalent 2w-bit data path (with a collision probability 2^-2w). We applied this technique on NH universal hash, a universal hash function that uses multiplications and additions. We implemented the straightforward 32-bit pipelined NH universal hash data path and the divide-and-concatenate architecture that uses four equivalent 8-bit divide-and-concatenate NH universal hash data paths on a Xilinx Virtex II XC2VP7-7 field programmable gate array (FPGA) device. This divide-and-concatenate architecture yielded a 94% increase in throughput with only 40% hardware overhead. Finally, the implementation of universal message authentication code (UMAC) with collision probability 2^-32 using the divide-and-concatenate NH hash as a building block yielded a throughput of 79.2 Gb/s with only 3840 Virtex II XC2VP7-7 FPGA slices.


Design Automation Conference | 2004

Divide-and-concatenate: an architecture level optimization technique for universal hash functions

Bo Yang; Ramesh Karri; David A. McGrew

The authors present an architectural optimization technique called divide-and-concatenate for hardware architectures of universal hash functions based on three observations: 1) the area of a multiplier and associated data path decreases quadratically and their speeds increase gradually as their operand size is reduced; 2) multiplication is at the core of universal hash functions and multipliers consume most of the area of universal hash function hardware; and 3) two universal hash functions are equivalent if they have the same collision-probability property. In the proposed approach, the authors divide a 2w-bit data path (with collision probability 2^-2w) into two w-bit data paths (each with collision probability 2^-w), apply one message word to these two w-bit data paths and concatenate their results to construct an equivalent 2w-bit data path (with a collision probability 2^-2w). The divide-and-concatenate technique is complementary to all circuit-, logic-, and architecture-optimization techniques. The authors applied this technique on a linear congruential universal hash (LCH) family. When compared to the 100% overhead associated with duplicating a straightforward 32-bit LCH data path, the divide-and-concatenate approach that uses four equivalent 8-bit data paths yields a 101% increase in throughput with only 52% hardware overhead.
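The collision-probability argument behind divide-and-concatenate can be checked directly: with independent keys on the two halves, the 2w-bit hash collides exactly when both w-bit halves do, so the probabilities multiply (2^-w times 2^-w = 2^-2w). The toy multiply-mod-prime family below is an illustrative stand-in, not the paper's LCH or NH family:

```python
W = 4    # width of one half data path, in bits
P = 257  # prime modulus for a toy multiplicative hash family

def h(key: int, x: int) -> int:
    """One w-bit hash path (illustrative family, not the paper's)."""
    return (key * x % P) % (1 << W)

def concat_hash(k1: int, k2: int, x: int) -> int:
    """Two independent w-bit paths concatenated into one 2w-bit result."""
    return (h(k1, x) << W) | h(k2, x)

# Exhaustively count collisions on a fixed pair (x, y) over all keys:
# the full hash collides iff BOTH halves collide, so with independent
# keys the collision counts (and probabilities) multiply exactly.
x, y = 3, 200
half = sum(h(k, x) == h(k, y) for k in range(1, P))
full = sum(concat_hash(k1, k2, x) == concat_hash(k1, k2, y)
           for k1 in range(1, P) for k2 in range(1, P))
assert full == half * half
```

This is the "equivalence" in observation 3: the concatenated construction matches the collision probability of the wide data path while using two narrow, faster multipliers.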
