
Publications


Featured research published by Richard W. Watson.


ieee conference on mass storage systems and technologies | 1995

The parallel I/O architecture of the high-performance storage system (HPSS)

Richard W. Watson; Robert A. Coyne

Datasets up to terabyte size and petabyte total capacities have created a serious imbalance between I/O and storage-system performance and system functionality. One promising approach is the use of parallel data-transfer techniques for client access to storage, peripheral-to-peripheral transfers, and remote file transfers. This paper describes the parallel I/O architecture and mechanisms, parallel transport protocol (PTP), parallel FTP, and parallel client application programming interface (API) used by the high-performance storage system (HPSS). Parallel storage integration issues with a local parallel file system are also discussed.
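The core idea behind HPSS's parallel transfers is striping: a single file's byte range is partitioned round-robin across several mover streams that run concurrently. A minimal sketch of that idea, assuming a toy stripe layout (the names, stripe width, and segment size here are hypothetical, not the HPSS API):

```python
from concurrent.futures import ThreadPoolExecutor

STRIPE_WIDTH = 4   # number of parallel mover streams (hypothetical)
SEGMENT_SIZE = 8   # bytes per stripe segment (tiny, for illustration)

def segments_for_mover(data: bytes, mover: int):
    """Yield (offset, chunk) pairs assigned to this mover.
    Segments rotate round-robin across movers, as in striped layouts."""
    for off in range(mover * SEGMENT_SIZE, len(data),
                     STRIPE_WIDTH * SEGMENT_SIZE):
        yield off, data[off:off + SEGMENT_SIZE]

def parallel_transfer(data: bytes) -> bytes:
    """Move all segments concurrently and reassemble them by offset."""
    sink = bytearray(len(data))

    def run(mover: int):
        for off, chunk in segments_for_mover(data, mover):
            sink[off:off + len(chunk)] = chunk   # stands in for a network write

    with ThreadPoolExecutor(STRIPE_WIDTH) as pool:
        for m in range(STRIPE_WIDTH):
            pool.submit(run, m)
    return bytes(sink)
```

Because each segment lands at a fixed offset, the streams need no ordering among themselves; this independence is what lets striped transfers scale with the number of movers.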


ACM Transactions on Computer Systems | 1987

Gaining efficiency in transport services by appropriate design and implementation choices

Richard W. Watson; Sandra A. Mamrak

End-to-end transport protocols continue to be an active area of research and development involving (1) design and implementation of special-purpose protocols, and (2) reexamination of the design and implementation of general-purpose protocols. This work is motivated by the perceived low bandwidth and high delay, CPU, memory, and other costs of many current general-purpose transport protocol designs and implementations. This paper examines transport protocol mechanisms and implementation issues and argues that general-purpose transport protocols can be effective in a wide range of distributed applications because (1) many of the mechanisms used in the special-purpose protocols can also be used in general-purpose protocol designs and implementations, (2) special-purpose designs have hidden costs, and (3) very special operating system environments, overall system loads, application response times, and interaction patterns are required before general-purpose protocols are the main system performance bottlenecks.


conference on high performance computing (supercomputing) | 1992

Storage systems for national information assets

Robert A. Coyne; Harry Hulen; Richard W. Watson

An industry-led collaborative project, called the National Storage Laboratory (NSL), has been organized to investigate technology for storage systems that will be the future repositories for the national information assets. Lawrence Livermore National Laboratory, through its National Energy Research Supercomputer Center (NERSC), is the operational site and the provider of applications. It is anticipated that the integrated testbed system will represent a significant advance in the technology for distributed storage systems capable of handling gigabyte-class files at gigabit-per-second data rates. The NSL collaboration is undertaking research in four areas: network-attached storage; multiple, dynamic, distributed storage hierarchies; layered access to storage system services; and storage system management. An overview of the prototype storage system is given. Three application domains have been chosen to test and demonstrate the system's effect on scientific productivity: climate models, magnetic fusion energy models, and digital imaging.


Computer Networks | 1981

Timer-based mechanisms in reliable transport protocol connection management

Richard W. Watson

There is a need for timer-based mechanisms (in addition to retransmission timers) to achieve reliable connection management in transport protocols designed to operate in a general network (datagram and internetwork) environment where packets can get lost, duplicated, or missequenced. This need is illustrated by discussing the timer mechanisms or assumptions (1) in the Department of Defense Transmission Control Protocol, initially designed using only a message exchange mechanism; and (2) in the Lawrence Livermore Laboratory Delta-t protocol, designed explicitly to be timer based. Some of the implementation and service implications of the approaches are discussed. The bounding of maximum packet lifetime and related parameters is important for achieving transport protocol reliability, and a mechanism is outlined for enforcing such a bound.
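The key dependency described above is that once maximum packet lifetime (MPL) is bounded, a sender can discard connection state purely on a timer: after every packet and retransmission it sent must have died in the network, sequence numbers can be safely reused without an explicit close handshake. A sketch of that reasoning, with illustrative parameter values and a hold-time formula that is representative rather than Delta-t's exact rule:

```python
MPL = 30.0   # assumed bound on maximum packet lifetime, seconds
R = 5.0      # assumed maximum time over which a packet may be retransmitted
A = 2.0      # assumed maximum delay before an acknowledgment is sent

# The sender must remember a closed connection until every packet it sent,
# including the last retransmission and its acknowledgment, has expired.
# One representative form of this bound is MPL + R + A (illustrative only).
SENDER_HOLD_TIME = MPL + R + A

def can_reuse_connection_record(close_time: float, now: float) -> bool:
    """State may be discarded, and sequence numbers reused, only after the
    hold time has elapsed; timers replace an explicit close exchange."""
    return now - close_time >= SENDER_HOLD_TIME
```

The same structure appears in TCP's TIME-WAIT state, where the hold time is expressed as twice the maximum segment lifetime.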


conference on high performance computing (supercomputing) | 1993

The High Performance Storage System

Robert A. Coyne; Harry Hulen; Richard W. Watson

The National Storage Laboratory (NSL) was organized to develop, demonstrate, and commercialize technology for the storage systems that will be the future repositories for the national information assets. Within the NSL, four Department of Energy laboratories and IBM Federal Systems Company pooled their resources to develop an entirely new High Performance Storage System (HPSS). The HPSS project concentrates on scalable parallel storage systems for highly parallel computers as well as traditional supercomputers and workstation clusters. Concentrating on meeting the high end of storage system and data management requirements, HPSS is designed using network-connected storage devices to transfer data at rates of 100 million bytes per second and beyond. The resulting products will be portable to many vendors' platforms. The three-year project is targeted to be complete in 1995. This paper provides an overview of the requirements, design issues, and architecture of HPSS, as well as a description of the distributed, multi-organization industry and national laboratory HPSS project.


local computer networks | 1989

The Delta-t transport protocol: features and experience

Richard W. Watson

With the advent of new high performance networks and distributed systems there is renewed interest in transport protocol designs that can support both request/response and stream styles of communication. The author examines Delta-t, a transport protocol designed to meet such goals. Delta-t's main contribution is in the area of connection management, where it achieves hazard-free connection management without explicit packet exchanges. The author reviews Delta-t's features and connection management in general and outlines some lessons useful for the implementation of high-performance networks.


ieee conference on mass storage systems and technologies | 1995

Analysis of striping techniques in robotic storage libraries

Leana Golubchik; Richard R. Muntz; Richard W. Watson

In recent years advances in computational speed have been the main focus of research and development in high performance computing. In comparison, the improvement in I/O performance has been modest. Faster processing speeds have created a need for faster I/O as well as for the storage and retrieval of vast amounts of data. The technology needed to develop these mass storage systems exists today. Robotic storage libraries are vital components of such systems. However, they normally exhibit high latency and long transmission times. We analyze the performance of robotic storage libraries and study striping as a technique for improving response time. Although striping has been extensively studied in the context of disk arrays, the architectural differences between robotic storage libraries and arrays of disks suggest that a separate study of striping techniques in such libraries would be beneficial.
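The architectural difference the abstract alludes to can be made concrete with a toy response-time model: in a robotic library a single robot arm mounts the striped cartridges one at a time, so striping parallelizes the transfer but serializes extra mechanical loading, a cost disk arrays do not pay. All parameter values below are hypothetical, chosen only to show the shape of the trade-off:

```python
def response_time(size_mb: float, stripe_width: int,
                  mount_s: float = 10.0, rate_mb_s: float = 5.0) -> float:
    """Toy model of a striped read from a robotic library: one robot arm
    mounts the stripe_width cartridges serially (unlike a disk array,
    where no mechanical loading occurs), then the drives transfer their
    partitions of the file in parallel."""
    serial_mounts = stripe_width * mount_s
    parallel_transfer = (size_mb / stripe_width) / rate_mb_s
    return serial_mounts + parallel_transfer
```

Under this model striping helps large files (transfer time dominates) and hurts small ones (mount time dominates), which is why striping conclusions from disk arrays do not carry over directly.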


Computer Networks | 1980

An architecture for support of network operating system services

Richard W. Watson; John G. Fletcher

This paper argues that network architectures should be designed with the explicit purpose of creating a coherent network operating system (NOS). The resulting NOS must be capable of efficient implementation as the base (native) operating system on a given machine or machines, or of being layered on top of existing operating systems as a guest system. The goals and elements of a network architecture to support a NOS are outlined. This architecture consists of a NOS model and three layers of protocol: an interprocess communication (IPC) layer, with an end-end protocol and lower sub-layer protocols as needed to support reliable uninterpreted message communication; a service support layer (SSL), abstracting logical structures and needs common to most services, including naming, protection, request/reply structure, data-type translation, and error control; and a layer of standard services (file, directory, terminal, process, clock, etc.).


Computer Networks | 1978

Mechanisms for a reliable timer-based protocol

John G. Fletcher; Richard W. Watson

Timer-based protocol mechanisms are developed for reliable and efficient transmission of both single-message and message-stream traffic. That is, correct data delivery is assured in the face of lost, damaged, duplicate, and out-of-sequence packets. The protocol mechanisms seem particularly useful in a high-speed local network environment. Current reliable protocol design approaches are not well suited for single-message modes of communication appropriate, for example, to distributed network operating systems. The timer intervals that must be maintained for sender and receiver are developed along with the rules for timer operation, packet acceptance, and connection opening and closing. The underlying assumptions about network characteristics required for the timer-based approach to work correctly are discussed, particularly that maximum packet lifetime can be bounded. The timer-based mechanisms are compared with mechanisms designed to deal with the same problems using the exchange of multiple messages to open and close logical connections or virtual circuits.
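On the receiver side, the timer-based approach means per-connection state is kept only while a receive timer runs; once it expires, the bounded packet lifetime guarantees that no duplicate from the old connection incarnation can still be in flight, so the state can be safely discarded. A sketch of that acceptance rule, with illustrative interval values and simplified bookkeeping (not the paper's exact timer rules):

```python
MPL = 30.0              # assumed bound on packet lifetime, seconds
RECEIVE_HOLD = 2 * MPL  # illustrative receive-timer interval

class Receiver:
    """Timer-based duplicate rejection: connection records live only while
    the receive timer runs; after it expires, the lifetime bound ensures
    no stale duplicate can arrive, so a fresh record is safe to create."""

    def __init__(self):
        self.conns = {}   # conn_id -> (highest_seq_seen, last_arrival_time)

    def accept(self, conn_id, seq, now):
        self.expire(now)
        if conn_id in self.conns:
            hi, _ = self.conns[conn_id]
            if seq <= hi:
                return False              # duplicate or out-of-sequence replay
        self.conns[conn_id] = (seq, now)  # new packet, or new incarnation
        return True

    def expire(self, now):
        """Drop records whose receive timer has run out."""
        self.conns = {cid: (hi, t) for cid, (hi, t) in self.conns.items()
                      if now - t < RECEIVE_HOLD}
```

This is what lets single-message traffic avoid an explicit connection-open exchange: the first packet itself creates the record, and timers, not handshakes, decide when it is safe to forget.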


ieee conference on mass storage systems and technologies | 2005

High performance storage system scalability: architecture, implementation and experience

Richard W. Watson

The high performance storage system (HPSS) provides scalable hierarchical storage management (HSM), archive, and file system services. Its design, implementation, and current dominant use are focused on HSM and archive services. It is also a general-purpose, global, shared, parallel file system, potentially useful in other application domains. When HPSS design and implementation began over a decade ago, scientific computing power and storage capabilities at a site, such as a DOE national laboratory, were measured in a few tens of gigaops, data archived in HSMs in a few tens of terabytes at most, data throughput rates to an HSM in a few megabytes/s, and daily throughput with the HSM in a few gigabytes/day. At that time, the DOE national laboratories and IBM HPSS design team recognized that we were headed for a data storage explosion driven by computing power rising to teraops/petaops, requiring data stored in HSMs to rise to petabytes and beyond, data transfer rates with the HSM to rise to gigabytes/s and higher, and daily throughput with an HSM to tens of terabytes/day. This paper discusses HPSS architectural, implementation, and deployment experiences that contributed to its success in meeting the above orders-of-magnitude scaling targets. We also discuss areas that need additional attention as we continue significant scaling into the future.

Collaboration


Top co-authors of Richard W. Watson:

John G. Fletcher, Lawrence Livermore National Laboratory
Samuel S. Coleman, Lawrence Livermore National Laboratory
Leana Golubchik, University of Southern California
Donald W. Davies, National Physical Laboratory