

Publications

Featured research published by John F. Meyer.


Formal Methods | 2002

Stochastic activity networks: formal definitions and concepts

William H. Sanders; John F. Meyer

Stochastic activity networks (SANs) have been used since the mid-1980s for performance, dependability, and performability evaluation. They serve as the modeling formalism of three modeling tools (METASAN, UltraSAN, and Möbius) and have been applied to a wide range of systems. This chapter provides the formal definitions and basic concepts associated with SANs, precisely explaining their behavior and execution policy.
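The SAN execution semantics summarized in this chapter can be illustrated with a toy simulator. This is a minimal sketch only: real SANs also include input/output gates, cases, and instantaneous activities, and the code below assumes a simple race policy with exponentially timed activities rather than reproducing any of the tools named above.

```python
import random

# Places hold token counts; a timed activity is enabled when all of its
# input places are marked, completes after an exponentially distributed
# delay, and moves one token from each input place to each output place.
def simulate(places, activities, t_end, seed=0):
    """places: dict name -> tokens.
    activities: list of (rate, inputs, outputs) with inputs/outputs
    given as lists of place names."""
    rng = random.Random(seed)
    t = 0.0
    while t < t_end:
        enabled = [a for a in activities
                   if all(places[p] > 0 for p in a[1])]
        if not enabled:
            break
        # Sample a completion time for each enabled activity; the
        # earliest one fires (race execution policy; memoryless, so
        # resampling each step is equivalent for exponential delays).
        dt, (rate, ins, outs) = min(
            (rng.expovariate(a[0]), a) for a in enabled)
        t += dt
        for p in ins:
            places[p] -= 1
        for p in outs:
            places[p] += 1
    return places

# Two-place cycle: a single token shuttles between 'up' and 'down'.
final = simulate({"up": 1, "down": 0},
                 [(1.0, ["up"], ["down"]), (2.0, ["down"], ["up"])],
                 t_end=100.0)
print(final)
```

The token count is conserved by construction, which is a quick sanity check on any marking returned by the simulator.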


IEEE Journal on Selected Areas in Communications | 1991

Reduced base model construction methods for stochastic activity networks

William H. Sanders; John F. Meyer

Reduced base model construction methods for stochastic activity networks are discussed. The basic definitions concerning stochastic activity networks are reviewed, and the types of variables used in the construction process are defined; these variables can be used to estimate both transient and steady-state system characteristics. The construction operations and theorems establishing the validity of the method are presented, together with a procedure for generating the reduced base model stochastic process for a given stochastic activity network and performance variable. Examples illustrate the method and demonstrate its effectiveness in reducing the size of a state space.
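The core idea behind such state-space reduction, collapsing states that the chosen variable cannot distinguish, can be sketched with an ordinary Markov chain lumping check. This is an illustration under assumed rates (two identical components with failure rate L and repair rate M), not the paper's SAN-level procedure:

```python
# Two identical components, each failing at rate L and repairing at
# rate M. The full chain on joint states (4 states) is strongly
# lumpable by the number of failed components (3 states).
L, M = 1.0, 5.0   # illustrative rates, not taken from the paper

Q = {  # generator as dict: state -> {successor: rate}
    "uu": {"du": L, "ud": L},
    "ud": {"dd": L, "uu": M},
    "du": {"dd": L, "uu": M},
    "dd": {"ud": M, "du": M},
}
partition = [("uu",), ("ud", "du"), ("dd",)]

def rate_to_block(state, block):
    return sum(Q[state].get(t, 0.0) for t in block if t != state)

def lumpable(Q, partition):
    # Strong lumpability: within each block, every state must have the
    # same total rate into every other block.
    for block in partition:
        for other in partition:
            if other is block:
                continue
            if len({rate_to_block(s, other) for s in block}) > 1:
                return False
    return True

assert lumpable(Q, partition)
# Build the reduced generator from any representative of each block.
reduced = {i: {j: rate_to_block(block[0], partition[j])
               for j in range(len(partition))
               if j != i and rate_to_block(block[0], partition[j]) > 0}
           for i, block in enumerate(partition)}
print(reduced)  # {0: {1: 2.0}, 1: {0: 5.0, 2: 1.0}, 2: {1: 10.0}}
```

The reduced chain is the familiar birth-death process with failure rates 2L then L and repair rates M then 2M, at a quarter fewer states; the payoff grows combinatorially with more replicated components.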


Archive | 1991

A Unified Approach for Specifying Measures of Performance, Dependability and Performability

William H. Sanders; John F. Meyer

Methods for evaluating system performance, dependability, and performability are becoming increasingly important, particularly for critical applications. Central to the evaluation process is the definition of the specific measures of system behavior that interest a user. This paper presents a unified approach to specifying measures of performance, dependability, and performability. The unification is achieved by 1) using a model class well suited to representing all three aspects of system behavior, and 2) defining a variable class that allows a wide range of measures of system behavior to be specified. The resulting approach permits many non-traditional as well as traditional measures of system performance, dependability, and performability to be specified in a unified manner. Example instantiations of variables within this class are given, and their relationships to variables used in traditional performance and dependability evaluations are illustrated.
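The unifying idea, one model with different reward assignments recovering performance, dependability, or performability measures, can be shown in a minimal sketch. The state probabilities and reward rates below are assumed purely for the example:

```python
# Illustrative steady-state probabilities for a 2-processor system
# (values assumed for the example, not taken from the paper).
pi = {"2 up": 0.90, "1 up": 0.08, "0 up": 0.02}

def expected_reward(pi, reward):
    # A reward variable assigns a rate to each model state; its
    # expectation under pi is the steady-state measure.
    return sum(pi[s] * reward[s] for s in pi)

# Dependability: reward 1 in operational states gives availability.
availability = expected_reward(pi, {"2 up": 1, "1 up": 1, "0 up": 0})
# Performance: reward = capacity (say 10 jobs/s per working processor).
throughput = expected_reward(pi, {"2 up": 20, "1 up": 10, "0 up": 0})
print(round(availability, 4), round(throughput, 4))
```

Performability measures arise the same way, by choosing reward rates that weight degraded states by the service they still deliver rather than treating them as simply up or down.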


IEEE Transactions on Reliability | 1993

Performability enhancement of fault-tolerant software

Ann T. Tai; John F. Meyer; A. Avizienis

Model-based performability evaluation is used to assess and improve the effectiveness of fault-tolerant software. The evaluation employs a measure that combines quantifications of performance and dependability in a synergistic manner, thus capturing the interaction between these two important attributes. The specific systems evaluated are a basic realization of N-version programming (NVP) (N=3) along with variants thereof. For each system, its corresponding stochastic process model is constructed in two layers, with performance and dependability submodels residing in the lower layer. The evaluation results reveal the extent to which performance, dependability, and performability of a variant are improved relative to the basic NVP system. More generally, the investigation demonstrates that such evaluations are indeed feasible and useful with regard to enhancing software performability.
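As a toy version of the dependability side of such a model, the majority-vote success probability for 3-version programming can be computed under an independence assumption. Independence between versions is assumed here purely for illustration; the paper's layered models exist precisely to capture richer, correlated behavior:

```python
from math import comb

def nvp_success(p_ok, n=3):
    # The majority voter delivers a correct result when at least
    # ceil((n+1)/2) of the n versions are correct (independent versions,
    # each correct with probability p_ok).
    need = n // 2 + 1
    return sum(comb(n, k) * p_ok**k * (1 - p_ok)**(n - k)
               for k in range(need, n + 1))

print(round(nvp_success(0.99), 6))  # 0.999702
print(round(nvp_success(0.90), 6))  # 0.972
```

Note the characteristic effect: voting amplifies already-good versions (0.99 to 0.999702) but offers much less for weaker ones, which is one reason variant designs and recovery strategies matter.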


Symposium on Reliable Distributed Systems | 2004

Model-based validation of an intrusion-tolerant information system

Fabrice Stevens; Tod Courtney; Sankalp Singh; Adnan Agbaria; John F. Meyer; William H. Sanders; Partha P. Pal

An increasing number of computer systems are designed to be distributed across both local and wide-area networks, performing a multitude of critical information-sharing and computational tasks. Malicious attacks on such systems are a growing concern, where attackers typically seek to degrade quality of service by intrusions that exploit vulnerabilities in networks, operating systems, and application software. Accordingly, designers are seeking improved techniques for validating such systems with respect to specified survivability requirements. In this regard, we describe a model-based validation effort that was undertaken as part of a unified approach to validating a networked intrusion-tolerant information system. Model-based results were used to guide the system's design as well as to determine whether a given survivability requirement was satisfied.


IEEE International Symposium on Fault-Tolerant Computing | 1988

Analysis of workload influence on dependability

John F. Meyer; Lu Wei

The authors consider a general, analytic approach to the study of workload effects on computer system dependability, where the faults considered are transient and the dependability measure in question is the time to failure, T_f. Under these conditions, workload plays two roles with opposing effects: it can help detect/correct a correctable fault, or it can cause the system to fail by activating an uncorrectable fault. As a consequence, the overall influence of workload on T_f is difficult to evaluate intuitively. To examine this in more formal terms, the authors establish a Markov renewal process model that represents the interaction between workload and fault accumulation in systems for which fault tolerance can be characterized by fault margins. Using this model, they consider some specific examples and show how the probabilistic nature of T_f can be formulated directly in terms of parameters regarding workload, fault arrivals, and fault margins.
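The two opposing roles of workload can be sketched with a small Monte Carlo model. All parameters and the specific failure mechanism below are assumed for illustration; the paper treats the problem analytically via Markov renewal theory:

```python
import random

# Transient faults arrive at rate LAM. Workload demands arrive at rate
# NU and, when a latent fault is present, either exercise detection and
# correct one fault (probability Q) or activate a latent fault, which
# is fatal once the fault margin M is exhausted.
LAM, NU, Q, M = 0.3, 1.0, 0.7, 3

def time_to_failure(rng):
    t, faults = 0.0, 0
    while True:
        dt_fault = rng.expovariate(LAM)
        dt_load = rng.expovariate(NU)
        if dt_fault < dt_load:
            t += dt_fault
            faults += 1                  # a new latent fault
        else:
            t += dt_load
            if faults == 0:
                continue                 # nothing for workload to hit
            if rng.random() < Q:
                faults -= 1              # workload helps: correction
            elif faults >= M:
                return t                 # workload hurts: fatal activation

rng = random.Random(1)
runs = 1000
mean_tf = sum(time_to_failure(rng) for _ in range(runs)) / runs
print(round(mean_tf, 1))
```

Raising NU in this sketch pulls in both directions at once, more corrections but also more activation opportunities, which is exactly why the paper argues the net effect of workload on T_f resists intuition.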


IEEE International Computer Performance and Dependability Symposium | 1995

Performability evaluation: where it is and what lies ahead

John F. Meyer

The concept of performability emerged from a need to assess a system's ability to perform when performance degrades as a consequence of faults. After almost 20 years of effort concerning its theory, techniques, and applications, performability evaluation is currently well understood by the many people responsible for its development. On the other hand, the utility of combined performance-dependability measures has yet to be appreciably recognized by the designers of contemporary computer systems. Following a review of what performability means, we discuss its present state with respect to both scientific and engineering contributions. In view of current practice and the potential design applicability of performability evaluation, we then point to some advances that are called for if this potential is indeed to be realized.


Computer Networks and ISDN Systems | 1993

Dimensioning of an ATM switch with shared buffer and threshold priority

John F. Meyer; Sergio Montagna; Roberto Paglino

A number of recent studies have addressed the use of priority mechanisms in Asynchronous Transfer Mode (ATM) switches. This investigation concerns the performance evaluation and dimensioning of a shared-buffer switching element with a threshold priority mechanism (partial buffer sharing). It assumes that incoming ATM cells are distinguished by a space priority assignment, i.e., loss of a high priority cell should be less likely than loss of a low priority cell. The evaluation method is analytic, based on an approximate discrete-time, finite-state Markov model of a switch and its incoming traffic. The development focuses on the formulation of steady-state loss probabilities for each priority class. Evaluation of delay measures for each class is also supported by the model; results concerning the latter are illustrated without development. The analysis of loss probabilities is then used to dimension the buffer capacity and threshold level such that required maximum cell loss probabilities are just satisfied for each cell type. Moreover, when so dimensioned with respect to relatively stringent loss requirements, i.e., probabilities of 10^-10 and 10^-8 for high and low priority cells, respectively, we find that both loss performance and resource utilization are appreciably improved over a comparable switch without such a mechanism.
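The partial buffer sharing policy can be sketched with a continuous-time birth-death approximation: low priority cells are admitted only while buffer occupancy is below the threshold T, while high priority cells are admitted until the buffer of size K is full. The paper's model is discrete-time, and the rates and sizes below are assumed for illustration:

```python
def pbs_loss(lam_hi, lam_lo, mu, K, T):
    # Birth rate depends on occupancy: both classes admitted below T,
    # only high priority between T and K; single server at rate mu.
    rates = [lam_hi + lam_lo if n < T else lam_hi for n in range(K)]
    pi = [1.0]
    for n in range(K):
        pi.append(pi[-1] * rates[n] / mu)   # detailed balance
    total = sum(pi)
    pi = [p / total for p in pi]
    loss_lo = sum(pi[T:])   # PASTA: low cell lost iff occupancy >= T
    loss_hi = pi[K]         # high cell lost only when buffer is full
    return loss_hi, loss_lo

hi, lo = pbs_loss(lam_hi=0.1, lam_lo=0.6, mu=1.0, K=16, T=12)
print(hi, lo)  # high-priority loss is orders of magnitude below low
```

Sweeping K and T in such a model is the essence of the dimensioning step: pick the smallest buffer and threshold for which both computed loss probabilities fall below their class targets.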


IEEE International Symposium on Fault-Tolerant Computing | 1988

Fault-tolerant BIBD networks

B. E. Aupperle; John F. Meyer

The use of multiple buses can improve both the fault tolerance and performance of local area computer networks. Existing schemes either depend on active components for full connectivity or can experience decreased performance as many hosts attempt to access one bus. An architecture class based on balanced incomplete block designs (BIBDs) is proposed to address these problems. A BIBD architecture uses redundant communication channels and exhibits degradable performance as faults occur. The performability of such networks is evaluated, where evaluation is based on stochastic activity network models. Results are provided comparing BIBD network performability with that of conventional multibus networks.
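A concrete BIBD instance is the (7, 3, 1) design, the Fano plane: 7 hosts and 7 buses, each bus connecting 3 hosts. A quick sketch verifies the defining property that every pair of hosts shares exactly one bus (an illustrative construction; the paper's performability evaluation of such networks is separate):

```python
from itertools import combinations

# Blocks {i, i+1, i+3} mod 7 form a (7, 3, 1) design: with lambda = 1,
# every pair of hosts shares exactly one bus, so traffic spreads across
# buses and no single bus failure disconnects any pair outright.
buses = [{(i) % 7, (i + 1) % 7, (i + 3) % 7} for i in range(7)]

shared = {pair: sum(1 for b in buses if set(pair) <= b)
          for pair in combinations(range(7), 2)}
assert all(count == 1 for count in shared.values())
print("every host pair shares exactly one bus")
```

Losing one bus in this design degrades, rather than destroys, connectivity for the three hosts it served, which is the "degradable performance as faults occur" behavior the abstract describes.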


Archive | 2011

Software Performability: From Concepts to Applications

Ann T. Tai; John F. Meyer; Algirdas Avizienis

Computers are currently used in a variety of critical applications, including systems for nuclear reactor control, flight control (both aircraft and spacecraft), and air traffic control. Moreover, experience has shown that the dependability of such systems is particularly sensitive to that of its software components, both the system software of the embedded computers and the application software they support. Software Performability: From Concepts to Applications addresses the construction and solution of analytic performability models for critical-application software. The book includes a review of general performability concepts along with notions which are peculiar to software performability. Since fault tolerance is widely recognized as a viable means for improving the dependability of computer systems (beyond what can be achieved by fault prevention), the examples considered are fault-tolerant software systems that incorporate particular methods of design diversity and fault recovery. Software Performability: From Concepts to Applications will be of direct benefit to both practitioners and researchers in the area of performance and dependability evaluation, fault-tolerant computing, and dependable systems for critical applications. For practitioners, it supplies a basis for defining combined performance-dependability criteria (in the form of objective functions) that can be used to enhance the performability (performance/dependability) of existing software designs. For those with research interests in model-based evaluation, the book provides an analytic framework and a variety of performability modeling examples in an application context of recognized importance. The material contained in this book will both stimulate future research on related topics and, for teaching purposes, serve as a reference text in courses on computer system evaluation, fault-tolerant computing, and dependable high-performance computer systems.

Collaboration


Dive into John F. Meyer's collaboration.

Top Co-Authors

Ann T. Tai (University of Texas at Dallas)

Lu Wei (University of Michigan)

A. Avizienis (University of Texas at Dallas)