Micah Beck
University of Tennessee
Publication
Featured research published by Micah Beck.
ACM Special Interest Group on Data Communication | 2002
Micah Beck; Terry Moore; James S. Plank
This paper discusses the application of end-to-end design principles, which are characteristic of the architecture of the Internet, to network storage. While putting storage into the network fabric may seem to contradict end-to-end arguments, we try to show not only that there is no contradiction, but also that adherence to such an approach is the key to achieving true scalability of shared network storage. After discussing end-to-end arguments with respect to several properties of network storage, we describe the Internet Backplane Protocol and the exNode, which are tools that have been designed to create a network storage substrate that adheres to these principles. The name for this approach is Logistical Networking, and we believe its use is fundamental to the future of truly scalable communication.
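The exNode described above plays a role for network storage analogous to the inode's role for disk blocks: it aggregates storage allocations on depots into a single logical file. A minimal Python sketch of that idea follows; the class and field names are hypothetical illustrations, not the actual exNode serialization.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Allocation:
    """One leased byte range on a storage depot (names hypothetical)."""
    depot: str    # host:port of an IBP depot holding the bytes
    offset: int   # logical offset of this range within the file
    length: int   # number of bytes in this allocation

@dataclass
class ExNode:
    """Aggregates depot allocations into one logical file, much as a
    Unix inode aggregates disk blocks into a file."""
    allocations: List[Allocation] = field(default_factory=list)

    def add(self, depot: str, offset: int, length: int) -> None:
        self.allocations.append(Allocation(depot, offset, length))

    def logical_length(self) -> int:
        # The file extends to the end of the furthest allocation.
        return max((a.offset + a.length for a in self.allocations), default=0)

    def depots_for(self, offset: int) -> List[str]:
        # Replication falls out naturally: several allocations
        # may cover the same logical byte.
        return [a.depot for a in self.allocations
                if a.offset <= offset < a.offset + a.length]
```

Because the exNode only aggregates weak, best-effort allocations, stronger properties such as replication or striping are composed by the end system, in keeping with the end-to-end approach the paper argues for.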
Software: Practice and Experience | 1999
James S. Plank; Yuqun Chen; Kai Li; Micah Beck; Gerry Kingsley
Checkpointing systems are a convenient way for users to make their programs fault‐tolerant by intermittently saving program state to disk and restoring that state following a failure. The main concern with checkpointing is the overhead that it adds to the running time of the program. This paper describes memory exclusion, an important class of optimizations that reduce the overhead of checkpointing. Some forms of memory exclusion are well‐known in the checkpointing community; others are relatively new. In this paper, we describe all of them within the same framework. We have implemented these optimization techniques in two checkpointers: libckpt, which works on Unix‐based workstations, and CLIP, which works on the Intel Paragon. Both checkpointers are publicly available at no cost. We have checkpointed various long‐running applications with both checkpointers and have explored the performance improvements that may be gained through memory exclusion. Results from these experiments are presented and show the improvements in time and space overhead.
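One well-known form of memory exclusion is incremental checkpointing: pages that are clean (unchanged since the last checkpoint) are excluded from the next one. The sketch below illustrates that idea by hashing fixed-size pages; it is a toy model under assumed names and a fixed page size, not libckpt's or CLIP's implementation, which track dirty pages through the virtual-memory system rather than by hashing.

```python
import hashlib

PAGE = 4096  # assumed page size for the sketch

def checkpoint(memory: bytes, prev_hashes: dict):
    """Save only pages that changed since the last checkpoint.

    Clean pages are excluded (one form of memory exclusion).
    Returns (saved_pages, page_hashes); feed page_hashes back in
    as prev_hashes on the next call.
    """
    saved, hashes = {}, {}
    for off in range(0, len(memory), PAGE):
        page = memory[off:off + PAGE]
        h = hashlib.sha256(page).hexdigest()
        hashes[off] = h
        if prev_hashes.get(off) != h:
            saved[off] = page          # dirty page: must be written
    return saved, hashes
```

The other major form described by the paper, exclusion of dead memory (bytes guaranteed to be overwritten before they are next read), requires knowledge of the program's future access pattern and is typically supplied through programmer annotations.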
Future Generation Computer Systems | 1999
Micah Beck; Jack J. Dongarra; Graham E. Fagg; G. Al Geist; Paul A. Gray; James Arthur Kohl; Mauro Migliardi; Keith Moore; Terry Moore; Philip Papadopoulous; Stephen L. Scott; Vaidy S. Sunderam
Abstract Heterogeneous Adaptable Reconfigurable Networked SystemS (HARNESS) is an experimental metacomputing system [L. Smarr, C.E. Catlett, Communications of the ACM 35 (6) (1992) 45–52] built around the services of a highly customizable and reconfigurable Distributed Virtual Machine (DVM). The successful experience of the HARNESS design team with the Parallel Virtual Machine (PVM) project has taught us both the features which make the DVM model so valuable to parallel programmers and the limitations imposed by the PVM design. HARNESS seeks to remove some of those limitations by taking a totally different approach to creating and modifying a DVM.
Computer Networks and ISDN Systems | 1998
Micah Beck; Terry Moore
Abstract The strategy of the Internet2 Distributed Storage Infrastructure is to promote the development of innovative applications by focusing on Internet channels that enable differential investment in infrastructure to support specific services and content. In our sense, a channel is a collection of content which can be transparently delivered to end-user communities at a chosen cost/performance point through a flexible, policy-based application of resources. Three key elements of this approach are focusing on collections of content, policy-based application of resources through replication, and transparent delivery to end users. The resulting architectural developments may be as important to the commodity Internet as to next-generation applications.
Future Generation Computer Systems | 2003
Alessandro Bassi; Micah Beck; Terry Moore; James S. Plank; D. Martin Swany; Richard Wolski; Graham E. Fagg
In this work we present the Internet Backplane Protocol (IBP), middleware created to allow the sharing of storage resources, implemented as part of the network fabric. IBP allows an application to control intermediate data staging operations explicitly. As IBP follows a very simple philosophy, very similar to the Internet Protocol, and the resulting semantics might be too weak for some applications, we introduce the exNode, a data structure that aggregates storage allocations on the Internet.
Parallel Processing Letters | 2003
James S. Plank; Scott Atchley; Ying Ding; Micah Beck
As peer-to-peer and wide-area storage systems come into vogue, the issue of delivering content that is cached, partitioned and replicated in the wide area, with high performance, becomes one of great importance. This paper explores three algorithms for such downloads. The storage model is based on the Network Storage Stack, which allows for flexible sharing and utilization of writable storage as a network resource. The algorithms assume that data is replicated in various storage depots in the wide area, and that the data must be delivered to the client either as a downloaded file or as a stream to be consumed by an application, such as a media player. The algorithms are threaded and adaptive, attempting to get good performance from nearby replicas while still utilizing the faraway replicas. After defining the algorithms, we explore their performance downloading a 50 MB file replicated on six storage depots in the U.S., Europe and Asia, to two clients in different parts of the U.S. One algorithm, called progress-driven redundancy, exhibits excellent performance characteristics for both file and streaming downloads.
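The core of such replica-selection algorithms is deciding which depot serves each block, and re-issuing a block to another replica when the current one lags or fails. A minimal sequential Python sketch of that idea follows; the paper's algorithms are threaded and driven by download progress rather than outright failure, and all names here (`fetch`, the depot strings) are hypothetical.

```python
def download(blocks, depots, fetch):
    """Fetch each block from the first replica that can serve it.

    Depots that succeed are promoted to the front of the order, a
    crude stand-in for the adaptivity of the threaded algorithms:
    nearby, fast replicas end up serving most blocks, while faraway
    replicas are still tried as fallbacks.  fetch(depot, block)
    returns the block's bytes or None on failure.
    """
    out = {}
    order = list(depots)
    for b in blocks:
        for depot in list(order):
            data = fetch(depot, b)
            if data is not None:
                out[b] = data
                order.remove(depot)
                order.insert(0, depot)   # prefer the depot that just worked
                break
        else:
            raise IOError(f"block {b} unavailable on all depots")
    return out
```

In the progress-driven redundancy variant the paper favors, a block is additionally re-issued to a second depot whenever the download's progress point catches up to an unfinished block, trading a little redundant bandwidth for steady delivery.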
Grid Computing | 2004
Viraj Bhat; Scott Klasky; Scott Atchley; Micah Beck; Douglas McCune; Manish Parashar
We have developed a threaded parallel data streaming approach using logistical networking (LN) to transfer multi-terabyte simulation data from computers at NERSC to our local analysis/visualization cluster, as the simulation executes, with negligible overhead. Data transfer experiments show that this concurrent data transfer approach is more favorable than writing to local disk and later transferring the data for post-processing. Our algorithms are network-aware, and can stream data at up to 97 Mb/s on a 100 Mb/s link from CA to NJ during a live simulation, using less than 5% CPU overhead at NERSC. This method is the first step in setting up a pipeline for simulation workflow and data management.
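The concurrency that makes this approach cheap is simple to picture: the simulation produces buffers while a sender thread drains them over the network, so computation and transfer overlap and the producer only blocks when the sender falls a bounded number of buffers behind. The Python sketch below illustrates that producer/consumer structure; it is an assumed model of the idea, not the LN streaming code.

```python
import queue
import threading

def stream(produce_steps, send, depth=8):
    """Overlap production with network transfer via a bounded queue.

    The producer blocks only when the sender lags `depth` buffers
    behind, which bounds memory use and keeps overhead on the
    producing side low.  `send` stands in for the network transfer.
    """
    q = queue.Queue(maxsize=depth)

    def sender():
        while True:
            item = q.get()
            if item is None:          # end-of-stream sentinel
                break
            send(item)

    t = threading.Thread(target=sender)
    t.start()
    for step in produce_steps:
        q.put(step)                   # blocks if the sender lags too far
    q.put(None)
    t.join()
```

Because `queue.Queue` is FIFO and there is a single sender thread, buffers arrive in production order, which matters when the receiving end is an analysis pipeline consuming time steps in sequence.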
ACM Special Interest Group on Data Communication | 2003
Micah Beck; Terry Moore; James S. Plank
The three fundamental resources underlying Information Technology are bandwidth, storage, and computation. The goal of wide area infrastructure is to provision these resources to enable applications within a community. The end-to-end principles provide a scalable approach to the architecture of the shared services on which these applications depend. As a prime example, IP and the Internet resulted from the application of these principles to bandwidth resources. A similar application to storage resources produced the Internet Backplane Protocol and Logistical Networking, which implements a scalable approach to wide area network storage. In this paper, we discuss the use of this paradigm for the design of a scalable service for wide area computation, or programmable networking. While it has usually been assumed that providing computational services in the network will violate the end-to-end principles, we show that this assumption does not hold. We illustrate the point by describing Logistical Network Computing, an extension to Logistical Networking that supports limited computation at intermediate nodes.
Cluster Computing and the Grid | 2002
Alessandro Bassi; Micah Beck; Graham E. Fagg; Terry Moore; James S. Plank; D. Martin Swany; Richard Wolski
In this work we present the Internet Backplane Protocol (IBP), middleware created to allow the sharing of storage resources, implemented as part of the network fabric. IBP allows an application to control intermediate data staging operations explicitly. As IBP follows a very simple philosophy, very similar to the Internet Protocol, and the resulting semantics might be too weak for some applications, we introduce the exNode, a data structure that aggregates storage allocations on the Internet.
Future Generation Computer Systems | 1999
James S. Plank; Henri Casanova; Micah Beck; Jack J. Dongarra
Computational power grids are computing environments with massive resources for processing and storage. While these resources may be pervasive, harnessing them is a major challenge for the average user. NetSolve is a software environment that addresses this concern. A fundamental feature of NetSolve is its integration of fault-tolerance and task migration in a way that is transparent to the end user. In this paper, we discuss how NetSolve’s structure allows for the seamless integration of fault-tolerance and migration in grid applications, and present the specific approaches that have been and are currently being implemented within NetSolve.
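Transparent fault tolerance of the kind the abstract describes means the client library, not the user, handles a server failure by resubmitting the task elsewhere. The Python sketch below captures that retry-and-migrate pattern in its simplest form; `run`, the server names, and the error handling are hypothetical stand-ins, not the NetSolve API.

```python
def solve(task, servers, run):
    """Run `task` on the first server that completes it.

    If a server fails mid-task, the request is silently resubmitted
    to the next candidate, so the caller never observes the failure;
    this is the essence of transparent fault tolerance and task
    migration.  `run(server, task)` raises RuntimeError on failure.
    """
    last_err = None
    for server in servers:
        try:
            return run(server, task)
        except RuntimeError as err:   # server crashed or unreachable
            last_err = err            # migrate the task to the next server
    raise RuntimeError(f"all servers failed: {last_err}")
```

The sketch restarts the task from scratch on migration; a checkpointing system like the one described above can instead resume a long-running task from its last saved state.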