
Publication


Featured research published by Bruce McNutt.


IEEE Transactions on Power Systems | 1988

The mixture of normals approximation technique for equivalent load duration curves

George Gross; Nancy V. Garapic; Bruce McNutt

A novel approximation technique based on mixture-of-normals distributions is presented. The mixture-of-normals approximation (MONA) for equivalent load duration curves (ELDCs) proceeds in three steps. First, the system load random variable is approximated by a mixture-of-normals distribution. Next, the outage random variable of a group of one or more units is approximated by such a distribution. These two approximations are then combined to derive the MONA of each ELDC. The construction of the MONA for the system load random variable can be interpreted as partitioning the load into categories based on load magnitude. A salient feature of the MONA technique is a simple recursive formula for convolving (rolling in) and deconvolving (rolling out) the contribution of each generating block. The performance of the MONA technique is analyzed in terms of its ability to fit the original load duration curve and the ELDCs, its accuracy, and its robustness.
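
For two-state generating units the roll-in step has a simple closed form: if the equivalent load is a mixture of normals and a unit of capacity c has forced outage rate q, the new equivalent load is the same mixture with weight (1 - q) plus a copy of it shifted by c with weight q. The sketch below illustrates that recursion with hypothetical load-model and unit numbers; the paper's roll-out (deconvolution) step and its management of the component count are omitted.

```python
import math

def normal_sf(x, mu, sigma):
    """P(N(mu, sigma^2) > x), via the complementary error function."""
    return 0.5 * math.erfc((x - mu) / (sigma * math.sqrt(2.0)))

def roll_in(mixture, capacity, outage_rate):
    """Convolve a two-state unit's outage into a mixture of normals.

    mixture is a list of (weight, mean, stddev) components.  With
    probability 1 - q the unit is available (no shift); with
    probability q its capacity adds to the equivalent load, shifting
    each component mean.  Mixtures of normals are closed under this
    operation, which is what makes the recursion work; note that each
    roll-in doubles the component count in this naive form, a growth
    the paper's technique avoids.
    """
    q = outage_rate
    return ([(w * (1.0 - q), mu, s) for w, mu, s in mixture] +
            [(w * q, mu + capacity, s) for w, mu, s in mixture])

def eldc(mixture, x):
    """Equivalent load duration curve: P(equivalent load > x)."""
    return sum(w * normal_sf(x, mu, s) for w, mu, s in mixture)

# Hypothetical two-component load model; roll in one 200 MW unit
# with a 5% forced outage rate.
load = [(0.6, 800.0, 60.0), (0.4, 1100.0, 90.0)]
equiv = roll_in(load, capacity=200.0, outage_rate=0.05)
print(eldc(load, 1000.0), eldc(equiv, 1000.0))
```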


IBM Journal of Research and Development | 1994

Background data movement in a log-structured disk subsystem

Bruce McNutt

The log-structured disk subsystem is a new concept for the use of disk storage whose future application has enormous potential. In such a subsystem, all writes are organized into a log, each entry of which is placed into the next available free storage. A directory indicates the physical location of each logical object (e.g., each file block or track image) as known to the processor originating the I/O request. For those objects that have been written more than once, the directory retains the location of the most recent copy. Other work with log-structured disk subsystems has shown that they are capable of high write throughputs. However, the fragmentation of free storage due to the scattered locations of data that become out of date can become a problem in sustained operation. To control fragmentation, it is necessary to perform ongoing garbage collection, in which the location of stored data is shifted to release unused storage for re-use. This paper introduces a mathematical model of garbage collection, and shows how collection load relates to the utilization of storage and the amount of locality present in the pattern of updates. A realistic statistical model of updates, based upon trace data analysis, is applied. In addition, alternative policies are examined for determining which data areas to collect. The key conclusion of our analysis is that in environments with the scattered update patterns typical of database I/O, the utilization of storage must be controlled in order to achieve the high write throughput of which the subsystem is capable. In addition, the presence of data locality makes it important to take the past history of data into account in determining the next area of storage to be garbage-collected.
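The relationship between utilization and collection load can be reproduced with a small simulation. The sketch below is my own construction, not the paper's mathematical model: it applies uniform random updates (the no-locality, database-like worst case the paper discusses) with a greedy collector that always cleans the segment holding the fewest live blocks, and reports blocks relocated per user write as utilization rises.

```python
import random

def gc_load(num_segments=128, seg_size=64, utilization=0.80,
            num_updates=100_000, seed=1):
    """Blocks relocated per user update under greedy garbage collection.

    Assumptions (mine, for illustration): fixed-size segments, uniform
    random updates, and a greedy policy that always collects the
    closed segment holding the fewest live blocks.
    """
    rng = random.Random(seed)
    num_blocks = int(num_segments * seg_size * utilization)
    seg_of = [None] * num_blocks                    # block -> holding segment
    members = [set() for _ in range(num_segments)]  # live blocks per segment
    free = list(range(num_segments))
    open_seg, fill, moved = free.pop(), 0, 0

    def place(block):
        nonlocal open_seg, fill
        if fill == seg_size:                        # open segment is full
            open_seg, fill = free.pop(), 0
        old = seg_of[block]
        if old is not None:
            members[old].discard(block)             # old copy becomes garbage
        seg_of[block] = open_seg
        members[open_seg].add(block)
        fill += 1

    def collect():
        nonlocal moved
        closed = [s for s in range(num_segments)
                  if s != open_seg and s not in free]
        victim = min(closed, key=lambda s: len(members[s]))
        for b in list(members[victim]):             # relocate live data
            place(b)
            moved += 1
        free.append(victim)                         # victim is now empty

    for b in range(num_blocks):                     # populate the log
        place(b)
    for _ in range(num_updates):
        while len(free) < 2:                        # keep relocation headroom
            collect()
        place(rng.randrange(num_blocks))
    return moved / num_updates

for u in (0.5, 0.7, 0.8, 0.9):
    print(f"utilization {u:.0%}: {gc_load(utilization=u):.2f} blocks moved per update")
```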


IBM Systems Journal | 1993

I/O subsystem configurations for ESA: new roles for processor storage

Bruce McNutt

I/O subsystem configurations are dictated by the storage and I/O requirements of the specific applications that use the disk hardware. Treating the latter requirement as a given, however, draws a boundary at the channel interface that is not well-suited to the capabilities of the Enterprise Systems Architecture (ESA). This architecture allows hardware expenditures in the I/O subsystem to be managed, while at the same time improving transaction response time and system throughput capability, by a strategy of processor buffering coupled with storage control cache. The key is to control the aggregate time per transaction spent waiting for physical disk motion. This paper investigates how to think about and accomplish such an objective. A case study, based on data collected at a large Multiple Virtual Storage installation, is used to investigate the potential types and amounts of memory use by individual files, both in storage control cache and in processor buffers. The mechanism of interaction between the two memory types is then examined and modeled so as to develop broad guidelines for how best to deploy an overall memory budget. These guidelines tend to contradict the usual metrics of storage control cache effectiveness, underscoring the need for an adjustment in pre-ESA paradigms.
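The budget-deployment question can be illustrated with a toy model. Everything in the sketch below is hypothetical (the saturation-curve hit ratios, the parameters ios_per_txn, disk_ms, buf_k, cache_k, and the 1024 MB budget); the paper derives the actual curves from measured file-level data, and it is the interaction between the two memory levels, simplified away here, that drives its guidelines.

```python
def disk_wait_per_txn(buf_mb, cache_mb, ios_per_txn=20.0, disk_ms=12.0,
                      buf_k=256.0, cache_k=512.0):
    """Milliseconds per transaction spent waiting on physical disk motion.

    Hit ratios follow a toy concave saturation curve h(m) = m / (m + k).
    A reference waits on the disk only if it misses both the processor
    buffer and the storage control cache.  The miss ratios are treated
    as independent -- a simplification, since buffer misses are exactly
    the references whose locality the buffer has already absorbed,
    which is why cache metrics measured in isolation can mislead.
    """
    h_buf = buf_mb / (buf_mb + buf_k)
    h_cache = cache_mb / (cache_mb + cache_k)
    return ios_per_txn * (1.0 - h_buf) * (1.0 - h_cache) * disk_ms

budget = 1024  # MB to divide between processor buffers and control cache
wait, buf = min((disk_wait_per_txn(b, budget - b), b)
                for b in range(0, budget + 1, 32))
print(f"best split: {buf} MB buffers / {budget - buf} MB cache "
      f"({wait:.2f} ms disk wait per transaction)")
```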


IEEE Power & Energy Magazine | 1984

A technique for approximating the capacity outage table based on the modeling of unit outage size

Bruce McNutt

The capacity outage table of a power generation system is used to obtain, for any level of outage, the probability that the total system outage capacity exceeds that level. This paper proposes a novel approach to the problem of approximating the function represented by the capacity outage table. The proposed approach uses an underlying probabilistic model of unit outage sizes to predict the general form of the function. We also show the application of the approach to an important class of power systems, for which the method is used to develop a powerful technique for interpolating the capacity outage curve between two widely spaced points.
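A capacity outage table for two-state units can be built exactly by a well-known recursive convolution, which makes a useful baseline for judging any approximation. The sketch below (hypothetical unit data) constructs the exact table and then applies a naive log-linear interpolation between two widely spaced points, a crude stand-in for, not a reproduction of, the paper's model-based technique.

```python
import math

def outage_table(units, step=10):
    """Exact capacity outage table for two-state units.

    units: list of (capacity_mw, forced_outage_rate); capacities are
    assumed to be multiples of step.  Returns p with
    p[i] = P(total outage capacity >= i * step), built by the standard
    recursive convolution  P'(X >= x) = (1-q) P(X >= x) + q P(X >= x - c).
    """
    n = sum(c for c, _ in units) // step
    p = [1.0] + [0.0] * n                       # no units: P(X >= 0) = 1
    for c, q in units:
        k = c // step
        p = [(1.0 - q) * p[i] + q * (p[i - k] if i >= k else 1.0)
             for i in range(n + 1)]
    return p

def log_linear(x0, p0, x1, p1, x):
    """Interpolate between two widely spaced table points on a log
    scale, motivated by the roughly exponential decay of the outage
    curve's tail."""
    t = (x - x0) / (x1 - x0)
    return math.exp((1.0 - t) * math.log(p0) + t * math.log(p1))

# Hypothetical system: ten identical 100 MW units, 5% forced outage rate.
table = outage_table([(100, 0.05)] * 10)
exact = table[40]                               # P(outage >= 400 MW)
approx = log_linear(200, table[20], 600, table[60], 400)
print(f"exact {exact:.2e}, interpolated {approx:.2e}")
```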


IBM Journal of Research and Development | 1996

Storage control cache resource management: increasing diversity, increasing effectiveness

David Alan Burton; Bruce McNutt

Efficient management of cached storage control resources has been important since the introduction of cached controllers in the early 1980s, and it continues to grow more important as technology advances. The need for cache resource management is due to the diversity of workloads that may coexist under a given controller. Some workloads may continually require the staging of new data into cache memory, with almost no benefit in terms of performance; other workloads may reap major performance benefits while requiring relatively little data staging. The sharing of resources among various workloads must therefore be controlled to ensure that workloads in the former group do not interfere too much with those in the latter. Management of cache functions is often viewed as the job of the host system to which the controller is attached. But it is now also possible for advanced controllers to perform such management functions in a stand-alone manner. Caching algorithms can change adaptively to match the workloads presented. This enables the controller to be ported across multiple platforms without dependencies on software support. This paper surveys the variety of techniques that have been used for cache resource control, and examines the rapid evolution in such techniques that is now occurring.
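One simple form such stand-alone adaptation can take is sketched below. This is a generic illustration with invented parameters, not the algorithm of any surveyed controller: the cache tracks per-workload hit ratios and, once a workload has accumulated enough history with little benefit, stages its misses at the eviction end of the LRU list so a cache-unfriendly stream cannot flush data that is actually being re-referenced.

```python
import random
from collections import OrderedDict

class AdaptiveCache:
    """Toy cache that limits staging for workloads it cannot help."""

    def __init__(self, capacity=500, min_history=1000, threshold=0.05):
        self.capacity = capacity
        self.min_history = min_history
        self.threshold = threshold
        self.lru = OrderedDict()        # front = next to evict, back = MRU
        self.stats = {}                 # workload -> [hits, references]

    def access(self, workload, key):
        hits, refs = self.stats.setdefault(workload, [0, 0])
        hit = key in self.lru
        if hit:
            self.lru.move_to_end(key)   # promote on re-reference
        else:
            self.lru[key] = None        # stage at the MRU end ...
            if refs >= self.min_history and hits < self.threshold * refs:
                self.lru.move_to_end(key, last=False)  # ... or demote
            if len(self.lru) > self.capacity:
                self.lru.popitem(last=False)           # evict from front
        self.stats[workload] = [hits + hit, refs + 1]
        return hit

# A scan over a huge address space alongside a small hot OLTP set:
# the scan's staging is curtailed, so the hot set stays resident.
cache, rng = AdaptiveCache(), random.Random(0)
tallies = {"scan": [0, 0], "oltp": [0, 0]}
for _ in range(30_000):
    for name, span in (("scan", 10**6), ("oltp", 400)):
        t = tallies[name]
        t[0] += cache.access(name, (name, rng.randrange(span)))
        t[1] += 1
for name, (h, n) in tallies.items():
    print(f"{name}: {h / n:.1%} hit ratio")
```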


Archive | 1993

Method and means for dynamic cache management by variable space and time binding and rebinding of cache extents to DASD cylinders

John G. Aschoff; Jeffrey A. Berger; David Alan Burton; Bruce McNutt; Stanley C. Kurtz


Archive | 2002

Cache management system with multiple cache lists employing roving removal and priority-based addition of cache entries

Thomas Charles Jarvis; Steven Robert Lowe; Bruce McNutt


Archive | 1994

Method and apparatus for dynamic cache memory allocation via single-reference residency times

Bruce McNutt; Brian Smith


Archive | 2007

Managing write requests in cache directed to different storage groups

Binny S. Gill; Michael Thomas Benhase; Joseph Smith Hyde II; Thomas Charles Jarvis; Bruce McNutt; Dharmendra S. Modha


Archive | 1996

System and method for management of persistent data in a log-structured disk array

Bruce McNutt; Jaishankar Moothedath Menon; Kevin Frank Smith
