Publication


Featured research published by Elizabeth A. M. Shriver.


Measurement and Modeling of Computer Systems | 1998

An analytic behavior model for disk drives with readahead caches and request reordering

Elizabeth A. M. Shriver; Arif Merchant; John Wilkes

Modern disk drives read ahead data and reorder incoming requests in a workload-dependent fashion. This improves their performance, but makes simple analytical models of them inadequate for performance prediction, capacity planning, workload balancing, and so on. To address this problem we have developed a new analytic model for disk drives that do readahead and request reordering. We did so by developing performance models of the disk drive components (queues, caches, and the disk mechanism) and a workload transformation technique for composing them. Our model includes the effects of workload-specific parameters such as request size and spatial locality. The result is capable of predicting the behavior of a variety of real-world devices to within 17% across a variety of workloads and disk drives.
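
As a rough illustration of the component-based modeling approach described above, the sketch below composes simple models of seek time, rotational latency, media transfer, and a readahead cache into a per-request service-time estimate. Every parameter value and the cache-hit term are assumptions chosen for illustration; the paper's actual component models and workload transformation are considerably more detailed.

    def expected_service_time_ms(
        io_size_kb: float,                # request size
        seq_fraction: float,              # spatial locality: fraction of sequential requests
        avg_seek_ms: float = 8.0,         # average seek time (assumed)
        rpm: int = 7200,                  # spindle speed (assumed)
        transfer_mb_s: float = 20.0,      # media transfer rate (assumed)
        readahead_hit_rate: float = 0.6,  # cache hit rate on sequential runs (assumed)
    ) -> float:
        """Expected per-request service time from composed component models:
        positioning (seek + rotational latency), media transfer, and a
        readahead cache that absorbs part of the sequential traffic."""
        rotation_ms = 60_000.0 / rpm    # time for one full rotation
        latency_ms = rotation_ms / 2.0  # average rotational latency
        transfer_ms = io_size_kb / 1024.0 / transfer_mb_s * 1000.0

        # Random requests pay the full mechanical cost.
        random_cost = avg_seek_ms + latency_ms + transfer_ms
        # Sequential requests often hit the readahead cache and skip positioning.
        seq_cost = (readahead_hit_rate * transfer_ms
                    + (1 - readahead_hit_rate) * (latency_ms + transfer_ms))

        return seq_fraction * seq_cost + (1 - seq_fraction) * random_cost

    print(expected_service_time_ms(io_size_kb=64, seq_fraction=0.5))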


Algorithmica | 1992

Algorithms for Parallel Memory II: Hierarchical Multilevel Memories

Jeffrey Scott Vitter; Elizabeth A. M. Shriver

In this paper we introduce parallel versions of two hierarchical memory models and give optimal algorithms in these models for sorting, FFT, and matrix multiplication. In our parallel models, there are P memory hierarchies operating simultaneously; communication among the hierarchies takes place at a base memory level. Our optimal sorting algorithm is randomized and is based upon the probabilistic partitioning technique developed in the companion paper for optimal disk sorting in a two-level memory with parallel block transfer. The probability of using ℓ times the optimal running time is exponentially small in ℓ(log ℓ) log P.
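
The probabilistic partitioning idea can be illustrated in miniature: pick bucket boundaries by sampling the input, so that every bucket receives roughly equal work with high probability. The sketch below is a generic sampling-based partitioner, not the paper's algorithm; the oversampling factor is an arbitrary choice.

    import bisect
    import random

    def sample_splitters(records: list[int], n_buckets: int, oversample: int = 8) -> list[int]:
        """Choose bucket boundaries from a random sample so that each bucket
        receives roughly equal work with high probability."""
        sample = sorted(random.sample(records, n_buckets * oversample))
        step = len(sample) // n_buckets
        return [sample[i * step] for i in range(1, n_buckets)]

    def partition(records: list[int], splitters: list[int]) -> list[list[int]]:
        """Scatter records into buckets delimited by the splitters; in the
        parallel setting each bucket would go to its own hierarchy or disk."""
        buckets: list[list[int]] = [[] for _ in range(len(splitters) + 1)]
        for r in records:
            buckets[bisect.bisect_right(splitters, r)].append(r)
        return buckets

    records = [random.randint(0, 10_000) for _ in range(1_000)]
    buckets = partition(records, sample_splitters(records, n_buckets=4))
    print([len(b) for b in buckets])  # bucket sizes come out roughly equal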


IEEE Transactions on Information Theory | 2000

How to turn loaded dice into fair coins

Ari Juels; Markus Jakobsson; Elizabeth A. M. Shriver; Bruce Hillyer

We present a new technique for simulating fair coin flips using a biased, stationary source of randomness. Sequences of random numbers are of pervasive importance in cryptography and vital to many other computing applications. Many sources of randomness, such as radioactive or quantum-mechanical sources, possess the property of stationarity. In other words, they produce independent outputs over fixed probability distributions. The output of such sources may be viewed as the result of rolling a biased or loaded die. While a biased die may be a good source of entropy, many applications require input in the form of unbiased bits, rather than biased ones. For this reason, von Neumann (1951) presented a now well-known and extensively investigated technique for using a biased coin to simulate a fair coin. We describe a new generalization of von Neumann's algorithm distinguished by its high level of practicality and amenability to analysis. In contrast to previous efforts, we are able to prove our algorithm optimally efficient, in the sense that it simulates the maximum possible number of fair coin flips for a given number of die rolls. In fact, we are able to prove that in an asymptotic sense our algorithm extracts the full entropy of its input. Moreover, we demonstrate experimentally that our algorithm achieves a high level of computational and output efficiency in a practical setting.
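
The von Neumann (1951) procedure referenced above is simple enough to state in a few lines: flip the biased coin in pairs and keep only the unequal pairs. Since heads-tails and tails-heads each occur with probability p(1-p), the kept bits are unbiased. The sketch below shows this baseline technique only; the paper's generalization to many-sided loaded dice is not reproduced here.

    import random
    from typing import Callable, Iterator

    def von_neumann(biased_flip: Callable[[], int]) -> Iterator[int]:
        """Flip the biased coin in pairs: 10 -> 1, 01 -> 0, 00/11 -> retry.
        Both accepted outcomes have probability p*(1-p), so outputs are fair."""
        while True:
            a, b = biased_flip(), biased_flip()
            if a != b:
                yield a

    loaded = lambda: 1 if random.random() < 0.8 else 0  # coin biased toward 1
    gen = von_neumann(loaded)
    bits = [next(gen) for _ in range(10_000)]
    print(sum(bits) / len(bits))  # close to 0.5 despite the 0.8 bias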


Measurement and Modeling of Computer Systems | 1999

Modeling and optimizing I/O throughput of multiple disks on a bus

Rakesh D. Barve; Elizabeth A. M. Shriver; Phillip B. Gibbons; Bruce Hillyer; Yossi Matias; Jeffrey Scott Vitter

In modern I/O architectures, multiple disk drives are attached to each I/O controller. A study of the performance of such architectures under I/O-intensive workloads has revealed a performance impairment that results from a previously unknown form of convoy behavior in disk I/O. In this paper we describe measurements of the read performance of multiple disks that share a SCSI bus under a heavy workload, and develop and validate formulas that accurately characterize the observed performance on several platforms for I/O sizes in the range 16-128 KB. Two terms in the formula clearly characterize the lost performance seen in our experiments. We describe techniques to deal with the performance impairment via user-level workarounds that achieve greater overlap of bus transfers with disk seeks and that increase the percentage of transfers that occur at the full bus bandwidth rather than at the lower bandwidth of a disk head. Experiments show bandwidth improvements of 10-20% when using these user-level techniques, but only in the case of large I/Os.


Computer and Communications Security | 1998

A practical secure physical random bit generator

Markus Jakobsson; Elizabeth A. M. Shriver; Bruce Hillyer; Ari Juels

We suggest a practical and economical way to generate random bits using a computer disk drive as a source of randomness. It requires no additional hardware (given a system with a disk), and no user involvement. As a concrete example of performance, on a Sun Ultra-1 with a Seagate Cheetah disk, it generates bits at a rate of either 5 bits per minute or 577 bits per minute, depending on the physical phenomena that we use as a source of randomness. The generated bits are random by a theoretical argument, and also pass a severe battery of statistical tests.
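
The underlying idea, timing variations in disk accesses as an entropy source, can be sketched as follows. This is an illustration only: it times small reads of an ordinary file and keeps the low-order bit of each latency, then debiases with the von Neumann trick. The paper's generator works against raw devices and comes with a careful physical and theoretical entropy argument; the file path and read pattern here are assumptions.

    import os
    import time

    def disk_timing_bits(path: str, n_samples: int) -> list[int]:
        """Collect the low-order bit of the latency of small reads at
        scattered offsets, then debias the raw bits with the von Neumann
        trick. An illustration of the timing-jitter idea only."""
        size = os.path.getsize(path)
        raw = []
        with open(path, "rb", buffering=0) as f:
            for i in range(n_samples):
                f.seek((i * 1_048_573) % max(size - 512, 1))  # scattered offsets
                t0 = time.perf_counter_ns()
                f.read(512)
                raw.append((time.perf_counter_ns() - t0) & 1)  # LSB of latency
        # Von Neumann debiasing over consecutive pairs of raw bits.
        return [a for a, b in zip(raw[::2], raw[1::2]) if a != b]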


Bell Labs Technical Journal | 2002

Mobile web searching

Kit G. August; Mark Hansen; Elizabeth A. M. Shriver

Searching the Web on a mobile device such as a cell phone poses unique problems for the user that cannot be easily overcome through the interface. Given the restricted display of current wireless devices, users cannot efficiently examine search results when they are presented as a long list. In our system, Hyponym, we organize search results into topics and allow the user to explore relevant resources within each topic. The data to support this kind of analysis are drawn from sequences of Web pages that users have visited while searching. We have found that queries that lead users to request similar pages tend to be topically related. From this simple observation, we form clusters or groups of queries that are used to structure the search display. Hyponym can be thought of as a collaborative filtering application that takes as its input navigation data from previous Web-searching tasks. Because of this, the lists that we return within each topic or query group tend to be much more accurate than those recommended by standard search engines. In fact, Hyponym can significantly cut the number of pages a user visits while searching the Web. We are currently extending the Hyponym framework to support personalization and location-specific information.
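
The core observation, that queries leading users to similar pages tend to be topically related, can be illustrated with a simple clustering sketch: group queries whose clicked-URL sets have high Jaccard overlap. This is not Hyponym's actual algorithm; the threshold and the union-find grouping are illustrative choices, and the example data is invented.

    from itertools import combinations

    def cluster_queries(clicks: dict[str, set[str]], threshold: float = 0.3) -> list[list[str]]:
        """Union queries whose clicked-URL sets have Jaccard similarity
        above the threshold, then return the resulting groups."""
        parent = {q: q for q in clicks}

        def find(q: str) -> str:
            while parent[q] != q:
                parent[q] = parent[parent[q]]  # path halving
                q = parent[q]
            return q

        for q1, q2 in combinations(clicks, 2):
            a, b = clicks[q1], clicks[q2]
            if a and b and len(a & b) / len(a | b) >= threshold:
                parent[find(q1)] = find(q2)

        groups: dict[str, list[str]] = {}
        for q in clicks:
            groups.setdefault(find(q), []).append(q)
        return list(groups.values())

    clicks = {
        "jaguar speed": {"wiki/Jaguar", "bigcats.org"},
        "big cat facts": {"bigcats.org", "wiki/Felidae"},
        "jaguar xj price": {"jaguar.example.com/xj"},
    }
    print(cluster_queries(clicks))  # the two animal queries group together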


Archive | 1996

An Introduction to Parallel I/O Models and Algorithms

Elizabeth A. M. Shriver; Mark H. Nodine

Problems whose data are too large to fit into main memory are called out-of-core problems. Out-of-core parallel-I/O algorithms can handle much larger problems than in-memory variants and have much better performance than single-device variants. However, they are not commonly used—partly because the understanding of them is not widespread. Yet such algorithms ought to be growing in importance because they address the needs of users with ever-growing problem sizes and ever-increasing performance needs.
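
A minimal example of an out-of-core algorithm is external merge sort: sort memory-sized chunks, spill each to a run file, then stream a k-way merge over the runs. The sketch below uses a single file system and an assumed chunk size; real parallel-I/O algorithms stripe runs across many disks to get the performance gains discussed above.

    import heapq
    import os
    import tempfile

    def external_sort(infile: str, outfile: str, chunk_lines: int = 100_000) -> None:
        """Sort a file too large for memory: sort chunk_lines-sized chunks
        in memory, spill each as a sorted run file, then k-way merge."""
        runs = []
        with open(infile) as f:
            while True:
                chunk = [line for _, line in zip(range(chunk_lines), f)]
                if not chunk:
                    break
                chunk.sort()
                tmp = tempfile.NamedTemporaryFile("w", delete=False, suffix=".run")
                tmp.writelines(chunk)
                tmp.close()
                runs.append(tmp.name)
        files = [open(r) for r in runs]
        with open(outfile, "w") as out:
            out.writelines(heapq.merge(*files))  # streaming k-way merge
        for fh in files:
            fh.close()
        for r in runs:
            os.remove(r)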


ACM SIGOPS European Workshop | 2000

Let's put NetApp and CacheFlow out of business!

Eran Gabber; Elizabeth A. M. Shriver

We believe that a lightweight and portable specialized file system library can provide applications with performance close to that of purpose-built appliances running on closed proprietary operating systems. Moreover, the application may execute on commodity hardware with a general-purpose operating system, and with minimal changes to the application source code. Such a file system would allow anyone to build cheap, high-performance appliances. We present the design of Hummingbird, a file system for caching web proxies. Hummingbird is 6-11 times faster than a general-purpose file system when serving a web proxy cache.


Measurement and Modeling of Computer Systems | 1998

Modeling and optimizing I/O throughput of multiple disks on a bus (summary)

Rakesh D. Barve; Elizabeth A. M. Shriver; Phillip B. Gibbons; Bruce Hillyer; Yossi Matias; Jeffrey Scott Vitter

For a wide variety of computational tasks, disk I/O continues to be a serious obstacle to high performance. The focus of the present paper is on systems that use multiple disks per SCSI bus. We measured the performance of concurrent random I/Os, and observed bus-related phenomena that impair performance. We describe these phenomena, and present a new I/O performance model that accurately predicts the average bandwidth achieved by a heavy workload of random reads from disks on a SCSI bus. This model, although relatively simple, predicts performance on several platforms to within 12% for I/O sizes in the range 16-128 KB. We describe a technique to improve the I/O bandwidth by 10-20% for random-access workloads that have large I/Os and high concurrency. This technique increases the percentage of disk head positioning time that is overlapped with data transfers, and increases the percentage of transfers that occur at bus bandwidth, rather than at disk-head bandwidth.
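
A deliberately simplified version of such a throughput model is sketched below: each disk alternates positioning and transfer, and the disks' combined demand is capped by the shared bus. All parameter values are assumptions, and the model illustrates the modeling style rather than the paper's validated formulas; in particular, it ignores the convoy effect that makes transfers run at disk-head rather than bus bandwidth.

    def aggregate_read_bandwidth_mb_s(
        n_disks: int,
        io_size_kb: float,
        positioning_ms: float = 10.0,  # avg seek + rotational latency (assumed)
        disk_head_mb_s: float = 15.0,  # media transfer rate (assumed)
        bus_mb_s: float = 40.0,        # shared SCSI bus bandwidth (assumed)
    ) -> float:
        """Each disk alternates positioning and transfer; the disks'
        combined demand is then capped by the shared bus."""
        io_mb = io_size_kb / 1024.0
        per_request_s = positioning_ms / 1000.0 + io_mb / disk_head_mb_s
        per_disk = io_mb / per_request_s          # one disk in isolation
        return min(n_disks * per_disk, bus_mb_s)  # bus is the shared cap

    for d in (1, 2, 4, 8):
        print(d, round(aggregate_read_bandwidth_mb_s(d, io_size_kb=64), 1))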


Performance Evaluation | 2000

Performance Analysis of Storage Systems

Elizabeth A. M. Shriver; Bruce Hillyer; Abraham Silberschatz

By “performance analysis of a storage system,” we mean the application of a variety of approaches to predict, assess, evaluate, and explain the system’s performance characteristics, along dimensions such as throughput, latency, and bandwidth. Several approaches are commonly used. One approach is analytical modeling, which is the writing of equations that predict performance variables as a function of parameters of the workload, equipment, and system configuration. Another approach is to collect measurements of a running system, and to observe the relationship between characteristics of the workload and the system components, and the resulting performance measurements. A third approach is simulation, in which a computer program implements a simplified representation of the behavior of the components of the storage system, and then a synthetic or actual workload is applied to the simulation program, so that the performance of the simulated components and system can be measured. Trace-driven simulation is an approach that controls a simulation model by feeding in a trace—a sequence of specific events at specific time intervals. The trace is typically obtained by collecting measurements from an actual running system.
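
A toy example of the trace-driven approach: replay a trace of (arrival time, service time) events through a single first-come-first-served device model and report average latency and throughput. The trace format and the FCFS model are illustrative assumptions, not a real storage simulator.

    def simulate_fcfs(trace: list[tuple[float, float]]) -> dict[str, float]:
        """Replay (arrival_time_s, service_time_s) events through a single
        FCFS device and report average latency and throughput."""
        clock = 0.0
        total_latency = 0.0
        for arrival, service in trace:
            start = max(clock, arrival)       # wait if the device is busy
            clock = start + service           # device finishes this request
            total_latency += clock - arrival  # queueing delay + service time
        return {
            "avg_latency_s": total_latency / len(trace),
            "throughput_req_s": len(trace) / clock,
        }

    # Synthetic trace: a request every 5 ms, each needing 4 ms of device time.
    trace = [(i * 0.005, 0.004) for i in range(1_000)]
    print(simulate_fcfs(trace))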

Collaboration


Dive into Elizabeth A. M. Shriver's collaborations.

Top Co-Authors

Mark Hansen (University of California)
Wee Teck Ng (University of Michigan)