Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Garth R. Goodson is active.

Publication


Featured research published by Garth R. Goodson.


Measurement and Modeling of Computer Systems | 2007

An analysis of latent sector errors in disk drives

Lakshmi N. Bairavasundaram; Garth R. Goodson; Shankar Pasupathy; Jiri Schindler

The reliability measures in today's disk-drive-based storage systems focus predominantly on protecting against complete disk failures. Previous disk reliability studies have analyzed empirical data in an attempt to better understand and predict disk failure rates. Yet very little is known about the incidence of latent sector errors, i.e., errors that go undetected until the corresponding disk sectors are accessed. Our study analyzes data collected from production storage systems over 32 months across 1.53 million disks (both nearline and enterprise class). We analyze factors that impact latent sector errors, observe trends, and explore their implications for the design of reliability mechanisms in storage systems. To the best of our knowledge, this is the first study of such a large scale (our sample size is at least an order of magnitude larger than in previously published studies) and the first to focus specifically on latent sector errors and their implications for the design and reliability of storage systems.
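Because latent sector errors surface only when a sector is read, storage systems typically hunt for them with background scrubbing: read every sector and compare it against a checksum recorded at write time. The following is a minimal illustrative sketch of that idea (not the paper's methodology); the disk model and function names are hypothetical.

```python
import zlib

# Hypothetical model: a disk as a list of (data, stored_checksum) sectors.
# A background scrub reads every sector and flags latent errors, i.e.
# sectors whose contents no longer match the checksum recorded at write time.

def write_sector(disk, index, data):
    disk[index] = (data, zlib.crc32(data))

def scrub(disk):
    """Return indices of sectors whose data no longer matches its checksum."""
    return [i for i, (data, cksum) in enumerate(disk)
            if zlib.crc32(data) != cksum]

disk = [None] * 4
for i in range(4):
    write_sector(disk, i, b"block-%d" % i)

# Simulate silent corruption of sector 2: the data changes, the checksum does not.
data, cksum = disk[2]
disk[2] = (b"corrupted", cksum)

print(scrub(disk))  # the scrub surfaces the latent error in sector 2
```

Without the scrub, the error in sector 2 would stay hidden until the next application read of that sector, which is exactly the "latent" window the study measures.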


Symposium on Operating Systems Principles | 2005

Fault-scalable Byzantine fault-tolerant services

Michael Abd-El-Malek; Gregory R. Ganger; Garth R. Goodson; Michael K. Reiter; Jay J. Wylie

A fault-scalable service can be configured to tolerate increasing numbers of faults without significant decreases in performance. The Query/Update (Q/U) protocol is a new tool that enables construction of fault-scalable Byzantine fault-tolerant services. The optimistic quorum-based nature of the Q/U protocol allows it to provide better throughput and fault-scalability than replicated state machines using agreement-based protocols. A prototype service built using the Q/U protocol outperforms the same service built using a popular replicated state machine implementation at all system sizes in experiments that permit an optimistic execution. Moreover, the performance of the Q/U protocol decreases by only 36% as the number of Byzantine faults tolerated increases from one to five, whereas the performance of the replicated state machine decreases by 83%.
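The fault-scalability trade-off above can be made concrete with the replica counts usually associated with these protocol families: quorum-based Q/U is generally described as using 5b + 1 servers (with quorums of 4b + 1) to tolerate b Byzantine faults, while agreement-based replicated state machines typically use 3b + 1. A small sketch (the exact counts are from the literature, not this abstract, so treat them as assumptions):

```python
# Server counts commonly associated with tolerating b Byzantine faults:
# the quorum-based Q/U protocol is described as using 5b + 1 servers with
# quorums of 4b + 1, while agreement-based replicated state machines
# typically use 3b + 1. Q/U spends extra servers to avoid server-to-server
# agreement in the common case, which is where its fault-scalability comes from.

def qu_servers(b):
    return 5 * b + 1

def qu_quorum(b):
    return 4 * b + 1

def rsm_servers(b):
    return 3 * b + 1

for b in range(1, 6):
    print("b=%d  Q/U servers=%d  Q/U quorum=%d  RSM servers=%d"
          % (b, qu_servers(b), qu_quorum(b), rsm_servers(b)))
```

The abstract's experiment at b = 1 through b = 5 corresponds to growing the Q/U system from 6 to 26 servers, over which throughput drops only 36%.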


Dependable Systems and Networks | 2004

Efficient Byzantine-tolerant erasure-coded storage

Garth R. Goodson; Jay J. Wylie; Gregory R. Ganger; Michael K. Reiter

This paper describes a decentralized consistency protocol for survivable storage that exploits local data versioning within each storage-node. Such versioning enables the protocol to efficiently provide linearizability and wait-freedom of read and write operations to erasure-coded data in asynchronous environments with Byzantine failures of clients and servers. By exploiting versioning storage-nodes, the protocol shifts most work to clients and allows highly optimistic operation: reads occur in a single round-trip unless clients observe concurrency or write failures. Measurements of a storage system prototype using this protocol show that it scales well with the number of failures tolerated, and its performance compares favorably with an efficient implementation of Byzantine-tolerant state machine replication.
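The optimistic single-round-trip read described above can be sketched as follows: each storage-node keeps every version of a fragment, and a read completes in one round trip when the latest timestamps returned by the queried nodes all agree. This is a simplified illustration with hypothetical class and function names, not the protocol's actual message format or fault handling.

```python
# Hedged sketch of the optimistic read path: storage-nodes retain all
# versions, and a client read completes in a single round trip when the
# latest timestamps returned by the nodes agree (no concurrency or write
# failures observed). Otherwise the client falls back to extra rounds.

class StorageNode:
    def __init__(self):
        self.versions = []            # list of (timestamp, fragment)

    def write(self, timestamp, fragment):
        self.versions.append((timestamp, fragment))

    def latest(self):
        return max(self.versions)     # highest-timestamp version wins

def optimistic_read(nodes):
    replies = [n.latest() for n in nodes]
    timestamps = {ts for ts, _ in replies}
    if len(timestamps) == 1:          # all nodes agree: done in one round trip
        return [frag for _, frag in replies]
    return None                       # concurrency observed: fall back to repair

nodes = [StorageNode() for _ in range(3)]
for i, n in enumerate(nodes):
    n.write(1, "frag-%d" % i)         # a completed write at timestamp 1

print(optimistic_read(nodes))
```

The versioning is what makes this safe: if a later partial write leaves the nodes disagreeing, the old versions are still there for the fallback path to read.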


DARPA Information Survivability Conference and Exposition | 2001

Survivable storage systems

Gregory R. Ganger; Pradeep K. Khosla; Mehmet Bakkaloglu; Michael W. Bigrigg; Garth R. Goodson; Semih Oguz; Vijay Pandurangan; Craig A. N. Soules; John D. Strunk; Jay J. Wylie

Survivable storage systems must maintain data and access to it in the face of malicious and accidental problems with storage servers, interconnection networks, client systems and user accounts. These four component types can be grouped into two classes: server-side problems and client-side problems. The PASIS architecture addresses server-side problems, including the connections to those servers, by encoding data with threshold schemes and distributing trust amongst sets of storage servers. Self-securing storage addresses client and user account problems by transparently auditing accesses and versioning data within each storage server. Thus, PASIS clients use threshold schemes to protect themselves from compromised servers, and self-securing servers use full access auditing to protect their data from compromised clients. Together, these techniques can provide truly survivable storage systems.
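The trust-distribution idea behind the threshold encoding can be illustrated with the simplest threshold-style scheme, n-of-n XOR splitting: no single share reveals anything about the data, and all n shares are needed to reconstruct it. PASIS itself supports general m-of-n threshold schemes; this sketch (with hypothetical function names) shows only the principle.

```python
import os

# Minimal n-of-n XOR splitting: n - 1 shares are random, and the last share
# is the data XORed with all of them. Any n - 1 shares look like random
# noise, so a compromised server holding one share learns nothing; all n
# shares XORed together recover the data.

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def split(data, n):
    """Split data into n shares; all n are required to reconstruct."""
    shares = [os.urandom(len(data)) for _ in range(n - 1)]
    last = data
    for s in shares:
        last = xor_bytes(last, s)
    return shares + [last]

def reconstruct(shares):
    out = shares[0]
    for s in shares[1:]:
        out = xor_bytes(out, s)
    return out

shares = split(b"secret block", 5)
print(reconstruct(shares))  # b'secret block'
```

A real m-of-n scheme (m < n) additionally tolerates the loss of n - m servers, which is what makes the architecture survivable rather than merely secret.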


Symposium on Operating Systems Principles | 2011

Design implications for enterprise storage systems via multi-dimensional trace analysis

Yanpei Chen; Kiran Srinivasan; Garth R. Goodson; Randy H. Katz

Enterprise storage systems are facing enormous challenges due to increasing growth and heterogeneity of the data stored. Designing future storage systems requires comprehensive insights that existing trace analysis methods are ill-equipped to supply. In this paper, we seek to provide such insights by using a new methodology that leverages an objective, multi-dimensional statistical technique to extract data access patterns from network storage system traces. We apply our method on two large-scale real-world production network storage system traces to obtain comprehensive access patterns and design insights at user, application, file, and directory levels. We derive simple, easily implementable, threshold-based design optimizations that enable efficient data placement and capacity optimization strategies for servers, consolidation policies for clients, and improved caching performance for both.


Symposium on Reliable Distributed Systems | 2005

Lazy verification in fault-tolerant distributed storage systems

Michael Abd-El-Malek; Gregory R. Ganger; Garth R. Goodson; Michael K. Reiter; Jay J. Wylie

Verification of write operations is a crucial component of Byzantine fault-tolerant consistency protocols for storage. Lazy verification shifts this work out of the critical path of client operations. This shift enables the system to amortize verification effort over multiple operations, to perform verification during otherwise idle time, and to have only a subset of storage-nodes perform verification. This paper introduces lazy verification and describes implementation techniques for exploiting its potential. Measurements of lazy verification in a Byzantine fault-tolerant distributed storage system show that the cost of verification can be hidden from both client read and write operations in workloads with idle periods. Furthermore, in workloads without idle periods, lazy verification amortizes the cost of verification over many versions and so provides a factor of four higher write bandwidth compared with performing verification during each write operation.
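The amortization idea above can be sketched very simply: writes append unverified versions off the critical path, and a background pass later verifies a whole batch at once, charging one verification to many writes. This is an illustrative toy (hypothetical class names), not the paper's protocol.

```python
# Hedged sketch of lazy verification: the write path only appends an
# unverified version; a background pass, run during idle time, verifies
# the accumulated batch in one step, so the per-write verification cost
# is amortized over the batch.

class LazyNode:
    def __init__(self):
        self.unverified = []
        self.verifications = 0

    def write(self, version):
        self.unverified.append(version)   # critical path: no verification work

    def lazy_verify(self):
        if self.unverified:
            self.verifications += 1       # one check covers the whole batch
            self.unverified.clear()

node = LazyNode()
for v in range(8):
    node.write(v)
node.lazy_verify()                        # e.g. triggered during idle time
print(node.verifications)                 # 1 verification for 8 writes
```

Eager verification would have paid 8 checks in the client's critical path; here one deferred check suffices, which is the source of the bandwidth gain the abstract reports.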


Foundations of Intrusion Tolerant Systems (Organically Assured and Survivable Information Systems) | 2003

Self-securing storage: protecting data in compromised systems

John D. Strunk; Garth R. Goodson; Michael L. Scheinholtz; Craig A. N. Soules; Gregory R. Ganger

Self-securing storage prevents intruders from undetectably tampering with or permanently deleting stored data. To accomplish this, self-securing storage devices internally audit all requests and keep old versions of data for a window of time, regardless of the commands received from potentially compromised host operating systems. Within the window, system administrators have this valuable information for intrusion diagnosis and recovery. Our implementation, called S4, combines log-structuring with journal-based metadata to minimize the performance costs of comprehensive versioning. Experiments show that self-securing storage devices can deliver performance that is comparable with conventional storage systems. In addition, analyses indicate that several weeks' worth of all versions can reasonably be kept on state-of-the-art disks, especially when differencing and compression technologies are employed.
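The detection-window mechanism can be sketched as follows: every overwrite retains the old version, and versions are pruned only once they age out of the window, regardless of what a (possibly compromised) host requests. A minimal sketch with hypothetical names; S4's actual log-structured, journal-based implementation is far more involved.

```python
# Hedged sketch of the versioning window: overwrites never destroy old
# versions; a prune pass discards only versions older than the detection
# window, so an intruder's tampering within the window remains visible
# for diagnosis and recovery.

WINDOW = 7 * 24 * 3600                         # e.g. one week, in seconds

class SelfSecuringStore:
    def __init__(self):
        self.versions = {}                     # block -> list of (time, data)

    def write(self, now, block, data):
        # The overwrite is recorded as a new version; old data is retained.
        self.versions.setdefault(block, []).append((now, data))

    def prune(self, now):
        for block, vs in self.versions.items():
            self.versions[block] = [(t, d) for t, d in vs
                                    if now - t <= WINDOW]

store = SelfSecuringStore()
store.write(0, "inode-9", b"original")
store.write(3600, "inode-9", b"tampered")      # intruder overwrite is versioned
store.prune(7200)
print(store.versions["inode-9"])               # both versions survive the prune
```

Because the device, not the host OS, enforces the window, a compromised host cannot shorten it, which is the core of the tamper-evidence guarantee.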


ACM Queue | 2007

Standardizing Storage Clusters

Garth R. Goodson; Sai Susarla; Rahul Iyer

Data-intensive applications such as data mining, movie animation, oil and gas exploration, and weather modeling generate and process huge amounts of data. File-data access throughput is critical for good performance. To scale well, these HPC (high-performance computing) applications distribute their computation among numerous client machines. HPC clusters can range from hundreds to thousands of clients with aggregate I/O demands ranging into the tens of gigabytes per second.


Symposium on Operating Systems Principles | 2005

Making enterprise storage more search-friendly

Shankar Pasupathy; Garth R. Goodson; Vijayan Prabhakaran

The focus of this work is to determine how to enhance storage systems to make search and indexing faster and better able to produce relevant answers. Enterprise search engines often run in appliances that must access the file system through standard network file system protocols (NFS, CIFS). As such, they are not able to take advantage of features that may be offered by the storage system. This work explores the types of APIs that a storage system can expose to a search engine to better enable it to do its job. We make the case that by exposing certain information we can make search faster and more relevant.


Operating Systems Design and Implementation | 2000

Self-securing storage: protecting data in compromised systems

John D. Strunk; Garth R. Goodson; Michael L. Scheinholtz; Craig A. N. Soules; Gregory R. Ganger

Collaboration


Dive into Garth R. Goodson's collaborations.

Top Co-Authors

Gregory R. Ganger (Carnegie Mellon University)

Jay J. Wylie (Carnegie Mellon University)

Michael K. Reiter (University of North Carolina at Chapel Hill)

John D. Strunk (Carnegie Mellon University)