Sean W. O'Malley
University of Arizona
Publication
Featured research published by Sean W. O'Malley.
acm special interest group on data communication | 1994
Lawrence S. Brakmo; Sean W. O'Malley; Larry L. Peterson
Vegas is a new implementation of TCP that achieves between 40 and 70% better throughput, with one-fifth to one-half the losses, as compared to the implementation of TCP in the Reno distribution of BSD Unix. This paper motivates and describes the three key techniques employed by Vegas, and presents the results of a comprehensive experimental performance study—using both simulations and measurements on the Internet—of the Vegas and Reno implementations of TCP.
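The abstract does not spell out the three techniques; the sketch below illustrates the best known of them, Vegas's congestion-avoidance rule that compares the throughput it expects at the current window against the throughput it actually measures. The function name and the alpha/beta thresholds are illustrative assumptions, not values taken from the paper.

```python
# Hedged sketch of Vegas-style congestion avoidance (not the authors' code).
# Vegas estimates how many extra segments are queued in the network and nudges
# the congestion window by one segment per round trip.

def vegas_update(cwnd, base_rtt, rtt, mss, alpha=2, beta=4):
    """Return the next congestion window in bytes.

    cwnd     -- current congestion window (bytes)
    base_rtt -- smallest round-trip time observed (seconds)
    rtt      -- most recently measured round-trip time (seconds)
    mss      -- maximum segment size (bytes)
    alpha, beta -- illustrative thresholds, in segments of backlog
    """
    expected = cwnd / base_rtt                       # throughput if nothing were queued
    actual = cwnd / rtt                              # throughput actually achieved
    backlog = (expected - actual) * base_rtt / mss   # extra segments sitting in queues

    if backlog < alpha:                  # too little data in flight: grow linearly
        return cwnd + mss
    if backlog > beta:                   # queues are building: back off linearly
        return max(mss, cwnd - mss)
    return cwnd                          # operating in the sweet spot

# Roughly 2.7 segments are queued here, inside [alpha, beta], so the window stays put.
print(vegas_update(cwnd=16 * 1460, base_rtt=0.050, rtt=0.060, mss=1460))
```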
ACM Transactions on Computer Systems | 1992
Sean W. O'Malley; Larry L. Peterson
Network software is a critical component of any distributed system. Because of its complexity, network software is commonly layered into a hierarchy of protocols, or more generally, into a protocol graph. Typical protocol graphs—including those standardized in the ISO and TCP/IP network architectures—share three important properties: the protocol graph is simple, the nodes of the graph (protocols) encapsulate complex functionality, and the topology of the graph is relatively static. This paper describes a new way to organize network software that differs from conventional architectures in all three of these properties. In our approach, the protocol graph is complex, individual protocols encapsulate a single function, and the topology of the graph is dynamic. The main contribution of this paper is to describe the ideas behind our new architecture, illustrate the advantages of using the architecture, and demonstrate that the architecture results in efficient network software.
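A minimal sketch of the architectural idea, assuming a hypothetical push-based interface: each protocol encapsulates a single function, and the graph that connects them is ordinary data assembled at run time. None of the class names below correspond to actual x-kernel objects.

```python
# Hedged sketch of composing single-function micro-protocols into a protocol
# graph at run time. Names and message formats are hypothetical.

class Protocol:
    def __init__(self, below=None):
        self.below = below              # next protocol in the graph

    def push(self, msg):
        raise NotImplementedError

class Checksum(Protocol):
    def push(self, msg):
        msg = msg + b"\x00\x00"         # placeholder checksum trailer
        return self.below.push(msg)

class Fragment(Protocol):
    MTU = 4
    def push(self, msg):
        for i in range(0, len(msg), self.MTU):   # split into MTU-sized pieces
            self.below.push(msg[i:i + self.MTU])

class Wire(Protocol):
    def push(self, msg):
        print("sent", msg)

# The graph topology is data, so it can be assembled (or reassembled) on the fly.
stack = Checksum(Fragment(Wire()))
stack.push(b"hello world")
```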
international cryptology conference | 1995
Richard Schroeppel; Hilarie K. Orman; Sean W. O'Malley; Oliver Spatscheck
The Diffie-Hellman key exchange algorithm can be implemented using the group of points on an elliptic curve over the field F_(2^n). A software version using n = 155 can be optimized to achieve computation rates slightly faster than non-elliptic-curve versions with a similar level of security. The fast computation of reciprocals in F_(2^n) is the key to the highly efficient implementation described here.
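The speedup described above hinges on fast reciprocals in a binary field. The sketch below shows the extended-Euclidean inversion idea in a much smaller field, GF(2^8) with the familiar reduction polynomial x^8 + x^4 + x^3 + x + 1, rather than the F_(2^155) arithmetic or the specific optimizations used in the paper.

```python
# Hedged sketch of reciprocal computation in a small binary field, GF(2^8) with
# reduction polynomial 0x11B. Elements are bit patterns standing for polynomials
# over GF(2); this only illustrates the inversion idea, not the paper's code.

def gf2_mul(a, b, mod=0x11B, degree=8):
    """Multiply two field elements (shift-and-add with reduction)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a >> degree:                 # reduce when the degree overflows
            a ^= mod
    return r

def gf2_inv(a, mod=0x11B):
    """Reciprocal of a via the extended Euclidean algorithm on polynomials."""
    if a == 0:
        raise ZeroDivisionError("0 has no inverse")
    u, v = a, mod
    g1, g2 = 1, 0
    while u != 1:
        j = u.bit_length() - v.bit_length()
        if j < 0:                       # keep the higher-degree polynomial in u
            u, v = v, u
            g1, g2 = g2, g1
            j = -j
        u ^= v << j                     # cancel the leading term of u
        g1 ^= g2 << j                   # mirror the step in the cofactor
    return g1

x = 0x53
assert gf2_mul(x, gf2_inv(x)) == 1      # x * x^(-1) == 1 in GF(2^8)
```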
symposium on operating systems principles | 1989
Larry L. Peterson; Norman C. Hutchinson; Sean W. O'Malley; M. Abbott
This paper reports our experiences implementing remote procedure call (RPC) protocols in the x-kernel. This exercise is interesting because the RPC protocols exploit two novel design techniques: virtual protocols and layered protocols. These techniques are made possible because the x-kernel provides an object-oriented infrastructure that supports three significant features: a uniform interface to all protocols, a late binding between protocol layers, and a small overhead for invoking any given protocol layer. For each design technique, the paper motivates the technique with a concrete example, describes how it is applied to the implementation of RPC protocols, and presents the results of experiments designed to evaluate the technique.
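A minimal sketch of two of the ideas named above: a uniform interface to all protocols, and a virtual protocol that binds to a lower layer per message rather than at configuration time. The class names and the size-based selection rule are hypothetical, not the x-kernel's actual object interface.

```python
# Hedged sketch of a uniform protocol interface plus a "virtual" protocol that
# routes each message to a lower layer at run time. Names are hypothetical.

class Protocol:
    def push(self, msg, dest):
        raise NotImplementedError

class SmallPacketTransport(Protocol):
    def push(self, msg, dest):
        print(f"single-packet RPC to {dest}: {msg!r}")

class BlastTransport(Protocol):
    def push(self, msg, dest):
        print(f"fragmenting large RPC to {dest} ({len(msg)} bytes)")

class SelectTransport(Protocol):
    """Virtual protocol: adds no headers of its own, it only decides, message
    by message, which real transport the call should travel over."""
    def __init__(self, threshold=1024):
        self.threshold = threshold
        self.small = SmallPacketTransport()
        self.large = BlastTransport()

    def push(self, msg, dest):
        layer = self.small if len(msg) <= self.threshold else self.large
        return layer.push(msg, dest)

rpc = SelectTransport()
rpc.push(b"ping", "example-server")
rpc.push(b"x" * 4096, "example-server")
```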
acm special interest group on data communication | 1994
Sean W. O'Malley; Todd A. Proebsting; Allen Brady Montz
USC is a new stub compiler that generates stubs that perform many data conversion operations. USC is flexible and can be used in situations where previously only manual code generation was possible. USC-generated code is up to 20 times faster than code generated by traditional argument marshaling schemes such as ASN.1 and Sun XDR. This paper presents the design of USC and a comprehensive set of experiments comparing USC performance with the best manually generated code and with traditional stub compilers.
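As an illustration of why compiled, layout-specific stubs outrun generic marshaling schemes, the sketch below contrasts a stub that packs a fixed record in one step with a marshaler that interprets a type description field by field. USC itself emits C stubs; this hypothetical three-field message only conveys the distinction.

```python
# Hedged illustration of compiled versus generic argument marshaling.
# The message layout here is hypothetical, not USC output.

import struct

# A "compiled" stub knows the exact wire layout ahead of time: one fixed-size
# conversion, no per-field interpretation.
HEADER = struct.Struct("!IHH")          # message id, flags, payload length

def marshal_compiled(msg_id, flags, length):
    return HEADER.pack(msg_id, flags, length)

# A generic marshaler walks a type description at run time, paying a dispatch
# cost on every field, in the spirit of ASN.1- or XDR-style encoders.
FIELD_PACKERS = {"u32": struct.Struct("!I"), "u16": struct.Struct("!H")}

def marshal_generic(schema, values):
    return b"".join(FIELD_PACKERS[t].pack(v) for t, v in zip(schema, values))

# Both produce the same bytes; the compiled stub just does far less work per call.
assert marshal_compiled(7, 1, 512) == marshal_generic(["u32", "u16", "u16"], [7, 1, 512])
```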
IEEE Computer | 1990
Larry L. Peterson; Norman C. Hutchinson; Sean W. O'Malley; Herman C. Rao
The x-kernel is an experimental operating system for personal workstations that allows uniform access to resources throughout a nationwide internet: an interconnection of networks similar to the TCP/IP internet, also called the National Research and Education Network (NREN). The x-kernel supports a library of protocols, and it accesses different resources with different protocol combinations. In addition, two user-level systems that give users an integrated and uniform interface to resources have been built on top of the x-kernel. These two systems, a file system and a command interpreter, hide differences among the underlying protocols.
acm special interest group on data communication | 1996
David Mosberger; Larry L. Peterson; Patrick G. Bridges; Sean W. O'Malley
This paper describes several techniques designed to improve protocol latency, and reports on their effectiveness when measured on a modern RISC machine employing the DEC Alpha processor. We found that the memory system, which has long been known to dominate network throughput, is also a key factor in protocol latency. As a result, improving instruction cache effectiveness can greatly reduce protocol processing overheads. An important metric in this context is memory cycles per instruction (mCPI), the average number of cycles that an instruction stalls waiting for a memory access to complete. The techniques presented in this paper reduce the mCPI by a factor of 1.35 to 5.8. In analyzing the effectiveness of the techniques, we also present a detailed study of the protocol processing behavior of two protocol stacks, TCP/IP and RPC, on a modern RISC processor.
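A back-of-the-envelope reading of the mCPI metric: total latency is the instruction count times (useful cycles per instruction plus memory stall cycles per instruction), so shrinking mCPI attacks the stall component directly. All numbers below are illustrative assumptions, not measurements from the paper.

```python
# Hedged, illustrative use of the mCPI metric. Instruction count, base CPI,
# and clock rate are made up for the example.

def protocol_latency_us(instructions, base_cpi, mcpi, clock_hz):
    """Latency = instructions * (cycles doing work + cycles stalled on memory)."""
    cycles = instructions * (base_cpi + mcpi)
    return cycles / clock_hz * 1e6

before = protocol_latency_us(3000, base_cpi=1.0, mcpi=2.9, clock_hz=175e6)
after = protocol_latency_us(3000, base_cpi=1.0, mcpi=0.5, clock_hz=175e6)
print(f"{before:.1f} us -> {after:.1f} us")   # better cache behavior shrinks only the stall term
```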
IEEE ACM Transactions on Networking | 1997
Claude Castelluccia; Walid Dabbous; Sean W. O'Malley
A protocol compiler takes as input an abstract specification of a protocol and generates an implementation of that protocol. Protocol compilers usually produce inefficient code, both in terms of code speed and code size. We show that the combination of two techniques makes it possible to build protocol compilers that generate efficient code. These techniques are: (i) the use of a compiler that generates from the specification a single tree-shaped automaton (rather than multiple independent automata) and (ii) the use of optimization techniques applied at the automaton level, i.e., on the branches of the trees. We have developed a protocol compiler that uses both these techniques. The compiler takes as input a protocol specification written in the synchronous language Esterel. The specification is compiled into a single automaton by the Esterel front-end compiler. The automaton is then optimized and converted into C code by our protocol optimizer, HIPPCO. HIPPCO improves code performance and reduces code size by simultaneously optimizing the performance of the common path and the size of the uncommon path. We evaluate the gain expected with our approach on a real-life example, namely a working subset of the TCP protocol generated from an Esterel specification. We compare the protocol code generated with our approach to that derived from the standard BSD TCP implementation. The results are very encouraging: HIPPCO-generated code executes up to 25% fewer instructions than the BSD code for input packet processing while increasing the code size by only 25%.
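A minimal sketch of the common-path/uncommon-path split that this style of code generation exploits: the frequent case is handled by a few inline tests, and everything else drops into one shared routine that can be optimized for size rather than speed. The connection fields and predicates below are hypothetical, not HIPPCO output.

```python
# Hedged sketch of splitting a protocol's receive path into a fast common case
# and a compact, out-of-line uncommon case. Field names are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Segment:
    seq: int
    flags: set
    data: bytes

@dataclass
class Connection:
    state: str = "ESTABLISHED"
    rcv_nxt: int = 0
    received: list = field(default_factory=list)

    def deliver(self, data):
        self.received.append(data)

def handle_segment(conn, seg):
    # Common path: in-order data on an established connection, plain ACK.
    # This is the branch worth keeping inline and fast.
    if (conn.state == "ESTABLISHED"
            and seg.seq == conn.rcv_nxt
            and seg.flags == {"ACK"}):
        conn.rcv_nxt += len(seg.data)
        conn.deliver(seg.data)
        return
    # Uncommon path: reordering, SYN/FIN/RST, state transitions. Factored into
    # one out-of-line routine so its code-size cost is paid only once.
    handle_uncommon(conn, seg)

def handle_uncommon(conn, seg):
    ...  # full state-machine processing would go here

c = Connection()
handle_segment(c, Segment(seq=0, flags={"ACK"}, data=b"abc"))
print(c.rcv_nxt)   # 3: the fast path advanced the receive sequence
```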
Archive | 1998
Steven R. Kleiman; David Hitz; Guy Harris; Sean W. O'Malley
operating systems design and implementation | 1999
Norman C. Hutchinson; Stephen L. Manley; Mike Federwisch; Guy Harris; Dave Hitz; Steven R. Kleiman; Sean W. O'Malley