Publication


Featured research published by William C. Carter.


Proceedings of the 1969 24th National Conference | 1969

Reliability modeling techniques for self-repairing computer systems

Willard G. Bouricius; William C. Carter; Peter R. Schneider

This paper develops techniques for generating and using mathematical models applicable to architectural evaluation of the tradeoffs involved in designing self-repairing highly reliable computers for long missions. These systems must use standby sparing and their reliability is shown to be extremely sensitive to small variations in a new design parameter, the coverage, c, defined as the probability of system recovery given the existence of a failure. Interactive terminal calculations show c to be the single most important parameter in high-reliability system design. Changing the coverage from 1 to .98 can result in orders of magnitude change in system mission time with a specified reliability. Most techniques for increasing system reliability (e.g. adding more spares) are shown to be futile in the face of an inadequate .99 coverage. Adding checking, diagnostics, etc. to improve failure coverage is shown to be the most advantageous technique by examples of system tradeoff evaluation. This mandates extensive application of modeling techniques throughout all computer system design phases.
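The sensitivity to coverage can be illustrated with a simplified standby-sparing model (an assumption for illustration, not the paper's exact equations): one active unit with exponential failure rate λ plus one standby spare, where a failure is recovered with probability c, giving R(t) = e^{-λt}(1 + cλt).

```python
import math

def reliability(t, lam=1e-4, c=1.0):
    """Reliability at time t of one active unit plus one standby spare.

    Simplified model (illustrative assumption): the active unit fails at
    rate lam; with probability c the failure is covered and the spare
    takes over.  R(t) = e^{-lam*t} * (1 + c*lam*t).
    """
    return math.exp(-lam * t) * (1.0 + c * lam * t)

def mission_time(target, c, lam=1e-4, hi=1e6):
    """Longest mission time t with reliability(t) >= target (bisection;
    valid because reliability is monotone decreasing in t for c <= 1)."""
    lo = 0.0
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if reliability(mid, lam, c) >= target:
            lo = mid
        else:
            hi = mid
    return lo

# Mission time at a demanding reliability target collapses as coverage
# drops below 1, while adding the spare at all helps far less.
for c in (1.0, 0.99, 0.98):
    print(f"c = {c}: mission time = {mission_time(1 - 1e-7, c):.3f}")
```

At a target reliability of 1 − 10⁻⁷, the uncovered-failure term dominates: mission time for c = 0.98 is roughly two orders of magnitude shorter than for c = 1, echoing the paper's point that small coverage losses swamp the benefit of extra spares.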


IEEE Transactions on Computers | 1971

Reliability Modeling for Fault-Tolerant Computers

Willard G. Bouricius; William C. Carter; Donald C. Jessep; Peter R. Schneider; Aspi B. Wadia

Reliability modeling and the mathematical equations involved are discussed for general computer systems organized to be fault tolerant. This paper summarizes the work done over the last four years on mathematical reliability modeling by the authors.


Design Automation Conference | 1979

Symbolic Simulation for Correct Machine Design

William C. Carter; William H. Joyner; Daniel Brand

Program verification techniques which manipulate symbolic rather than actual values have been used successfully to find errors in implementations of computer designs. This paper describes symbolic simulation, a method similar to symbolic execution of programs, and its use in proving the correctness of machine architectures implemented in microcode. The procedure requires formal descriptions of machines at both the architectural and register transfer levels, but has been used to detect errors in implementation which often elude the standard test case approach.
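The core idea, simulating with symbolic rather than concrete values and comparing the two machine descriptions, can be shown in miniature. This toy sketch (my construction, not the authors' system) represents a register's symbolic value as a frozenset of symbol names, so that XOR of distinct symbols is a symmetric difference, and checks that an XOR-based swap at the register-transfer level matches an architectural SWAP:

```python
def xor(x, y):
    """Symbolic XOR of bit-vectors represented as frozensets of symbol
    names: XOR over GF(2) of distinct symbols is symmetric difference."""
    return x ^ y

def architecture(state):
    """Architectural specification: SWAP exchanges registers A and B."""
    return {"A": state["B"], "B": state["A"]}

def implementation(state):
    """Register-transfer implementation: swap via three XOR transfers."""
    a, b = state["A"], state["B"]
    a = xor(a, b)
    b = xor(b, a)
    a = xor(a, b)
    return {"A": a, "B": b}

# Symbolic simulation: run both descriptions on symbolic initial values
# and compare final symbolic states instead of testing concrete inputs.
init = {"A": frozenset({"a0"}), "B": frozenset({"b0"})}
assert architecture(init) == implementation(init)
```

Because the comparison is on symbolic expressions, one run covers all concrete register contents at once, which is the advantage over the standard test-case approach the abstract mentions.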


IBM Journal of Research and Development | 1981

Reliability, availability, and serviceability of IBM computer systems: a quarter century of progress

M. Y. Hsiao; William C. Carter; James Thomas; William R. Stringfellow

Computer systems have achieved significant progress in the areas of technology, performance, capability, and RAS (reliability/availability/serviceability) during the last quarter century. In this paper, we review the advances of IBM computer systems in the RAS area. This progress has for the most part been evolutionary; however, in some cases it has been revolutionary. RAS developments have been driven primarily by technological advances and by increases in functional capability and complexity, but RAS considerations have also played a leading role and have improved technological and functional capability. The paper briefly reviews the progress of computer technology. It points out how IBM has maintained or improved its systems' RAS capabilities in the face of the greatly increased number of components and system complexity through improved system recovery and serviceability capability, as well as through basic improvements in intrinsic component failure rates. The paper also covers the CPU, tape, and disk areas and shows how RAS improvements in these areas have been significant. The main objective is to provide a comprehensive view of significant developments in the RAS characteristics of IBM computer systems over the past twenty-five years.


IEEE Computer | 1971

A Survey of Fault Tolerant Computer Architecture and its Evaluation

William C. Carter; Willard G. Bouricius

In striving to design highly reliable, highly available computers, two basic strategies have been employed: increasing reliability through advances in component technology, and designing self-repairing computers which use functional redundancy to permit correct performance (perhaps in a degraded manner) in the presence of component failures. Time has shown a fluctuation in the popularity of each strategy, based primarily on changes in technologies, applications, and costs.


IEEE Transactions on Computers | 1971

Logic Design for Dynamic and Interactive Recovery

William C. Carter; Donald C. Jessep; Aspi B. Wadia; Peter R. Schneider; Willard G. Bouricius

Recovery in a fault-tolerant computer means the continuation of system operation with data integrity after an error occurs. This paper delineates two parallel sets of functions required for recovery: detection, diagnosis, and reconfiguration for the hardware; data integrity, checkpointing, and restart for the software. The hardware relies on the recovery variable set, checking circuits, and diagnostics, and the software relies on the recovery information set, audit routines, and reconstruct routines, to characterize the system state and assist in recovery when required. Of particular utility is a hardware unit, the recovery control unit, which serves as an interface between error detection and software recovery programs in the supervisor and provides dynamic interactive recovery.
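The software half of this picture, checkpointing and restart, can be sketched in a few lines. This is a hypothetical toy (not the paper's recovery control unit): state is snapshotted at safe points, and a detected error rolls the task back to the last checkpoint before retrying.

```python
import copy

class CheckpointedTask:
    """Toy checkpoint/restart: snapshot state at safe points and roll
    back to the last checkpoint when an error is detected."""

    def __init__(self):
        self.state = {"step": 0, "total": 0}
        self.checkpoint = copy.deepcopy(self.state)

    def save_checkpoint(self):
        self.checkpoint = copy.deepcopy(self.state)

    def restart(self):
        # Discard partial results; resume from the last consistent state.
        self.state = copy.deepcopy(self.checkpoint)

    def run(self, steps, fail_at=None):
        """Sum 0..steps-1, optionally injecting one transient fault."""
        while self.state["step"] < steps:
            self.save_checkpoint()
            s = self.state["step"]
            self.state["total"] += s
            self.state["step"] = s + 1
            if fail_at is not None and s == fail_at:
                self.restart()      # error detected mid-step: roll back
                fail_at = None      # transient fault: the retry succeeds

task = CheckpointedTask()
task.run(5, fail_at=2)
print(task.state["total"])  # same result as a fault-free run: 10
```

Despite the injected fault, the final total equals the fault-free result, which is exactly the data-integrity guarantee the abstract describes.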


International Symposium on Microarchitecture | 1976

Automated proofs of microprogram correctness

William H. Joyner; William C. Carter; George B. Leeman

This paper presents a method for verifying microprograms with computer aid, and examples of its application to actual systems. The specifications for an architecture and those for the computer on which it is to be implemented are both described formally, with the microcode supplied as data to the low level description. A correspondence between the two descriptions is then formalized, and a system of programs is used to prove mathematically that the correspondence holds. This interactive, goal-directed system not only provides a proof that microcode performs as specified, but more often aids in detecting and correcting microprogram errors. Several errors in actual implementations, some of which were difficult to detect using test cases, have been discovered in this way.
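The correspondence being established can be illustrated in miniature: an architectural-level specification of an operation and a microcode-style implementation of the same operation, checked against each other. This sketch (my construction, and exhaustive simulation over a tiny state space rather than the symbolic proof machinery the paper describes) compares an architectural multiply against a shift-and-add microcode loop:

```python
WIDTH = 4                    # toy machine word width
MASK = (1 << WIDTH) - 1

def architectural_mul(x, y):
    """Architectural specification: MUL produces x*y modulo 2**WIDTH."""
    return (x * y) & MASK

def microcode_mul(x, y):
    """Microcode-level implementation: shift-and-add multiply loop."""
    acc = 0
    for bit in range(WIDTH):
        if (y >> bit) & 1:
            acc = (acc + (x << bit)) & MASK
    return acc

# Correspondence check over the full (small) operand space.  The paper's
# system established this symbolically, so it did not need exhaustion.
for x in range(1 << WIDTH):
    for y in range(1 << WIDTH):
        assert architectural_mul(x, y) == microcode_mul(x, y)
```

A single wrong shift amount or a missing mask in the loop makes some assertion fail, which mirrors the paper's observation that the method's main payoff is finding and localizing microcode errors.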


IEEE Transactions on Computers | 1971

A Simple Self-Testing Decoder Checking Circuit

William C. Carter; Keith A. Duke; Donald C. Jessep

A method is given of designing a simple self-checking and self-testing decoder checking circuit which uses much less circuitry than previous checkers. An analysis is given of the circuit savings and of the probability of erroneous decoder operation before error detection.
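The function such a checker performs can be modeled simply: a fault-free decoder asserts exactly one output line, so the checker flags any output pattern that is not one-hot. This software sketch (my illustration; it models only the check, not the circuit-level self-checking/self-testing design) uses a 2-to-4 decoder:

```python
def decode(addr_bits):
    """2-to-4 decoder: the one-hot output lines for a 2-bit address."""
    a1, a0 = addr_bits
    index = 2 * a1 + a0
    return [1 if i == index else 0 for i in range(4)]

def checker(outputs):
    """Decoder checker: a fault-free decoder asserts exactly one line.
    Returns True when the output is a valid one-hot pattern."""
    return sum(outputs) == 1

# A fault-free decoder passes the check for every address...
for a1 in (0, 1):
    for a0 in (0, 1):
        assert checker(decode((a1, a0)))

# ...while a stuck-at-1 fault on a second output line is detected.
faulty = decode((0, 1))
faulty[3] = 1              # inject a second asserted line
assert not checker(faulty)
```

In hardware the one-hot test is itself built from checkable logic so that faults inside the checker are also exposed, which is the self-testing property the abstract refers to.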


IEEE Transactions on Computers | 1973

Fault-Tolerant Computing: An Introduction and a Viewpoint

William C. Carter

After approximately 20 years of obscurity, the field of fault-tolerant computing was revived by the formation of the IEEE Technical Committee on Fault-Tolerant Computing, by a series of articles in Computer for January/February 1971, by the 1971 International Symposium on Fault-Tolerant Computing in Pasadena, Calif., and by an IEEE Transactions on Computers Special Issue on Fault-Tolerant Computing in November 1971. Interest and activity continued apace, and the 1972 International Symposium on Fault-Tolerant Computing was held in Newton, Mass. Most of the excellent papers in this second IEEE Transactions Special Issue on Fault-Tolerant Computing were presented at that symposium. As an introduction to these papers, consideration of the ultimate goals of this discipline and the universe in which our work is being done is most appropriate.


IBM Journal of Research and Development | 1984

Implementation and evaluation of a (b,k)-adjacent error-correcting/detecting scheme for supercomputer systems

Jean Arlat; William C. Carter

This paper describes a coding scheme developed for a specific supercomputer architecture and structure. The code considered is a shortened (b,k)-adjacent single-error-correcting double-error probabilistic-detecting code with b = 5, k = 1, and code group width = 4. An evaluation of the probabilistic double-error-detection capability of the code was performed for different organizations of the coding/decoding strategies for the codewords. This led to the selection of a system organization encompassing the traditional feature of memory data error protection and also providing for the detection of major addressing errors that may result from faults affecting the interconnection network communication modules. The cost of implementation is a limited amount of extra hardware and a negligible degradation in the double-error-detection properties of the code.
