Publications


Featured research published by Mark Smotherman.


Journal of Guidance, Control, and Dynamics | 1986

The Hybrid Automated Reliability Predictor

Joanne Bechta Dugan; Kishor S. Trivedi; Mark Smotherman; Robert Geist

In this paper, we present an overview of the hybrid automated reliability predictor (HARP), under development at Duke and Clemson Universities. The HARP approach to reliability prediction is characterized by a decomposition of the overall model into distinct fault-occurrence/repair and fault/error-handling submodels. The fault-occurrence/repair model can be cast as either a fault tree or as a Markov chain and is solved analytically. Both exponential and Weibull time-to-failure distributions are allowed. There are a variety of choices available for the specification of the fault/error-handling behavior, which may be solved analytically or simulated. Both graphical and textual interfaces are provided to HARP.
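
To make the decomposition concrete, here is a minimal sketch of the fault-occurrence/repair half on a toy duplex system: Kolmogorov forward equations with a per-component Weibull hazard and a fixed coverage probability folded in. This is not HARP's solver; the duplex state space, Weibull parameters, and coverage value are all assumptions made for illustration.

```python
# Toy fault-occurrence/repair submodel (illustrative only, not HARP):
# a duplex system with Weibull per-component hazards, solved numerically
# via the Kolmogorov forward equations. All parameter values are assumed.
from scipy.integrate import solve_ivp

BETA, ETA = 1.5, 1.0e4   # assumed Weibull shape / scale (hours)
COVERAGE = 0.99          # assumed probability a first fault is handled

def hazard(t):
    """Weibull hazard rate lambda(t) = (beta/eta) * (t/eta)**(beta - 1)."""
    return (BETA / ETA) * (t / ETA) ** (BETA - 1) if t > 0 else 0.0

def kolmogorov(t, p):
    # states: 0 = duplex up, 1 = simplex up, 2 = failed
    lam = hazard(t)
    q01 = 2 * lam * COVERAGE          # first fault covered -> degrade
    q02 = 2 * lam * (1 - COVERAGE)    # first fault uncovered -> failure
    q12 = lam                         # second fault -> spares exhausted
    return [-(q01 + q02) * p[0],
            q01 * p[0] - q12 * p[1],
            q02 * p[0] + q12 * p[1]]

sol = solve_ivp(kolmogorov, (0.0, 10.0), [1.0, 0.0, 0.0], rtol=1e-8, atol=1e-12)
print("P(system failed by t = 10 h) =", sol.y[2, -1])
```

Because the occurrence/repair side can be integrated cheaply like this, only the fast fault/error-handling behavior ever needs simulation, which is the point of the decomposition.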


IEEE Transactions on Reliability | 1989

A non-homogeneous Markov model for phased-mission reliability analysis

Mark Smotherman; Kay Zemoudeh

Three assumptions of Markov modeling for reliability of phased-mission systems that limit flexibility of representation are identified. The proposed generalization has the ability to represent state-dependent behavior, handle phases of random duration using globally time-dependent distributions of phase change time, and model globally time-dependent failure and repair rates. The approach is based on a single nonhomogeneous Markov model in which the concept of state transition is extended to include globally time-dependent phase changes. Phase change times are specified using nonoverlapping distributions with probability distribution functions that are zero outside assigned time intervals; the time intervals are ordered according to the phases. A comparison between a numerical solution of the model and simulation demonstrates that the numerical solution can be several times faster than simulation.
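
The generalization can be sketched on a toy two-phase mission in which the phase change is just one more globally time-dependent transition. All rates and the phase-change window below are assumptions; the phase-change time is uniform on an assigned interval [T0, T1], so its hazard is zero outside that window, mirroring the nonoverlapping-distribution construction in the abstract.

```python
# Toy single nonhomogeneous Markov model of a two-phase mission.
# The phase change appears as a state transition whose hazard is
# globally time-dependent and confined to the window [T0, T1].
from scipy.integrate import solve_ivp

T0, T1 = 4.0, 6.0          # assumed phase-change window (hours)
LAM1, LAM2 = 1e-3, 5e-3    # assumed failure rates in phases 1 and 2

def phase_change_rate(t):
    """Hazard of a uniform-on-[T0, T1] phase-change time: zero before T0,
    1/(T1 - t) inside the window, capped so the integration stays stable
    (the cap drains any residual phase-1 probability just past T1)."""
    if t < T0:
        return 0.0
    return min(1.0 / max(T1 - t, 1e-3), 1e3)

def odes(t, p):
    # states: 0 = up in phase 1, 1 = up in phase 2, 2 = failed
    mu = phase_change_rate(t)
    return [-(LAM1 + mu) * p[0],
            mu * p[0] - LAM2 * p[1],
            LAM1 * p[0] + LAM2 * p[1]]

sol = solve_ivp(odes, (0.0, 10.0), [1.0, 0.0, 0.0], max_step=0.05)
print("mission unreliability at t = 10 h:", sol.y[2, -1])
```

One numerical integration like this replaces many simulated sample paths, which suggests why the numerical solution can outrun simulation.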


International Symposium on Microarchitecture | 1994

A fill-unit approach to multiple instruction issue

Manoj Franklin; Mark Smotherman

Multiple issue of instructions occurs in superscalar and VLIW machines. The paper investigates a third type of machine design, which combines the code compatibility of superscalars with the absence of complex dependency-checking logic in the decoder, as in VLIW. In this design, a stream of scalar instructions is executed by the hardware and is simultaneously compacted into VLIW-type instructions, which are then stored in a structure called a shadow cache. When a shadow cache line contains the instructions requested by the fetch unit, the scalar instruction stream is preempted and all operations in the shadow cache line are simultaneously issued and executed. The mechanism that compacts instructions is called a fill unit, and was first proposed for dynamically compacting macro-operations into large executable units by Melvin, Shebanow, and Patt in 1988. We have extended their approach to directly handle data dependencies, delayed branches, and speculative execution (using branch prediction). This approach is evaluated using the MIPS architecture, and a six-functional-unit machine is found to be 52% to 108% faster than a single-issue processor for unrecompiled SPECint92 benchmarks.
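
The fill-unit loop itself is easy to sketch. The fragment below is an illustrative reconstruction, not the paper's design: the instruction format, the 4-wide line, and RAW-only dependence checking are all assumptions (the actual fill unit also handles delayed branches and speculative execution).

```python
# Illustrative fill-unit sketch: pack a retiring scalar instruction
# stream into VLIW-style shadow-cache lines, closing a line whenever
# a RAW dependence on the current line (or the line width) is hit.
from dataclasses import dataclass

ISSUE_WIDTH = 4  # assumed shadow-cache line width

@dataclass
class Instr:
    op: str
    dst: str
    srcs: tuple

def fill_unit(stream):
    lines, line, written = [], [], set()
    for ins in stream:
        # RAW hazard: a source register was written earlier in this line
        raw = any(s in written for s in ins.srcs)
        if raw or len(line) == ISSUE_WIDTH:
            lines.append(line)        # close the line -> shadow cache
            line, written = [], set()
        line.append(ins)
        written.add(ins.dst)
    if line:
        lines.append(line)
    return lines

prog = [Instr("add", "r1", ("r2", "r3")),
        Instr("mul", "r4", ("r5", "r6")),
        Instr("sub", "r7", ("r1", "r4")),   # depends on the two above
        Instr("add", "r8", ("r9", "r9"))]
for i, l in enumerate(fill_unit(prog)):
    print(f"line {i}: {[x.op for x in l]}")
```

Running the sketch packs the four scalar instructions into two dependence-free lines; on a later fetch of the same address, a whole line could issue at once.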


Computers & Electrical Engineering | 1984

Hybrid reliability modeling of fault-tolerant computer systems

Kishor S. Trivedi; Joanne Bechta Dugan; Robert Geist; Mark Smotherman

Current technology allows sufficient redundancy in fault-tolerant computer systems to insure that the failure probability due to exhaustion of spares is low. Consequently, the major cause of failure is the inability to correctly detect, isolate, and reconfigure when faults are present. Reliability estimation tools must be flexible enough to accurately model this critical fault-handling behavior and yet remain computationally tractable. This paper discusses reliability modeling techniques based on a behavioral decomposition that provides tractability by separating the reliability model along temporal lines into nearly disjoint fault-occurrence and fault-handling submodels. An Extended Stochastic Petri Net (ESPN) model provides the needed flexibility for representing the fault-handling behavior, while a nonhomogeneous Markov chain accounts for the possibly non-Poisson fault-occurrence behavior. Since the submodels are separate, the ESPN submodel, in which all time constants are of the same order of magnitude, can be simulated. The nonhomogeneous Markov chain is solved analytically, and the result is a hybrid model. The method of coverage factors, used to combine the submodels, is generalized to more accurately reflect the fault-handling effectiveness within the fault-occurrence model. However, due to approximations made in the aggregation of the two submodels and inaccurate estimation of component failure rates and other model parameters, errors can still arise in the subsequent reliability predictions. The accuracy of the model predictions is evaluated analytically, and error bounds on the system reliability are produced. These modeling techniques have been implemented in the HARP (Hybrid Automated Reliability Predictor) program.
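
A minimal sketch of the aggregation step may help: the fast fault-handling submodel is evaluated in isolation to yield a coverage factor c, which is then plugged into an analytic fault-occurrence model. Plain Monte Carlo stands in for the paper's Extended Stochastic Petri Net, and every timing parameter below is assumed.

```python
# Behavioral decomposition sketch: simulate the (fast) fault-handling
# submodel to estimate coverage c, then combine it with an analytic
# (slow) fault-occurrence model. All parameters are illustrative.
import math
import random

random.seed(1)

def simulate_handling(n=100_000, deadline=0.05):
    """Estimate c = P(fault detected and reconfigured within the deadline)."""
    covered = 0
    for _ in range(n):
        detect = random.expovariate(200.0)      # assumed detection rate (1/s)
        reconfig = random.uniform(0.001, 0.02)  # assumed reconfiguration time (s)
        if detect + reconfig <= deadline:
            covered += 1
    return covered / n

def duplex_reliability(t, lam, c):
    """Analytic occurrence model: 2-up --2*lam*c--> 1-up --lam--> failed,
    with an uncovered first fault (probability 1 - c) failing the system."""
    return math.exp(-2*lam*t) + 2*c*(math.exp(-lam*t) - math.exp(-2*lam*t))

c_hat = simulate_handling()
print(f"estimated coverage c = {c_hat:.4f}")
print(f"10-hour reliability  = {duplex_reliability(10.0, 1e-4, c_hat):.8f}")
```

The separation works because the handling submodel's time constants (milliseconds) and the occurrence submodel's (hours) differ by orders of magnitude, exactly the "temporal lines" the abstract mentions.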


Reliability and Maintainability Symposium | 1989

Ultrahigh reliability estimates through simulation

Robert Geist; Mark Smotherman

A statistical variance-reduction technique called importance sampling is described, and its effectiveness in estimating ultrahigh reliability of life-critical electronics systems is compared with that of the widely used HARP and SURE analytic tools. Importance sampling is seen to provide more accurate reliability estimates with relatively little computational expense for the models studied. The technique is also seen to provide a convenient method for handling globally time-dependent failure processes and uncertainty in model parameter values. Extreme sensitivity of the importance-sampling algorithm to its bias parameters is illustrated, and a novel technique for selection of these parameters is proposed.
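
The idea is easy to demonstrate on a toy model. The sketch below estimates a roughly 1e-8 failure probability for an assumed two-component parallel system by inflating failure rates and reweighting each trial by the likelihood ratio; the rates, mission time, and biasing factor are all invented for illustration.

```python
# Importance-sampling sketch for a rare-event reliability estimate:
# sample component lifetimes under inflated ("biased") failure rates,
# then reweight by the likelihood ratio of true to biased densities.
import math
import random

random.seed(7)
LAM, T, BIAS = 1e-5, 10.0, 1e3   # true rate (1/h), mission time, inflation

def trial():
    """One biased trial: the toy system fails iff both components fail by T."""
    w = 1.0
    lam_b = LAM * BIAS
    for _ in range(2):
        x = random.expovariate(lam_b)
        if x > T:
            return 0.0
        # likelihood ratio of an exponential sample observed at x
        w *= (LAM * math.exp(-LAM * x)) / (lam_b * math.exp(-lam_b * x))
    return w

N = 100_000
est = sum(trial() for _ in range(N)) / N
exact = (1 - math.exp(-LAM * T)) ** 2
print(f"IS estimate: {est:.3e}   exact: {exact:.3e}")
```

Crude Monte Carlo would need on the order of 1e10 trials to see even a handful of failures here; the sensitivity to bias parameters that the abstract highlights shows up if the inflation factor is pushed much higher or lower.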


International Symposium on Microarchitecture | 1991

Efficient DAG construction and heuristic calculation for instruction scheduling

Mark Smotherman; Sanjay Krishnamurthy; P. S. Aravind; David Hunnicutt

A number of heuristic algorithms for DAG-based instruction scheduling have been proposed over the past few years. In this paper, we explore the efficiency of three DAG construction algorithms and survey 26 proposed heuristics and their methods of calculation. Six scheduling algorithms are analyzed in terms of DAG construction and heuristic use. DAG structural statistics and scheduling times for the three construction algorithms are given for several popular benchmarks. The table-building algorithms are shown to be extremely efficient for programs with large basic blocks and yet appropriately handle the problem of retaining important transitive arcs. The node revisitation overhead of intermediate heuristic calculation steps is also investigated and shown to be negligible.
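
For flavor, here is a small sketch of basic-block DAG construction together with one heuristic common in such surveys, critical-path length. The quadratic pairwise scan below is the naive baseline (it retains all transitive arcs by brute force); the table-building constructions the paper evaluates avoid exactly this cost. The instruction format and unit latencies are assumptions.

```python
# Naive O(n^2) DAG construction over a basic block (RAW/WAR/WAW edges)
# plus the critical-path heuristic: longest path from a node to a leaf.
from collections import defaultdict

def build_dag(block):
    """block: list of (dst, srcs) register tuples, in program order."""
    succs = defaultdict(set)
    for i, (dst_i, srcs_i) in enumerate(block):
        for j in range(i):
            dst_j, srcs_j = block[j]
            raw = dst_j in srcs_i     # j writes a register i reads
            war = dst_i in srcs_j     # i overwrites a register j reads
            waw = dst_i == dst_j      # both write the same register
            if raw or war or waw:
                succs[j].add(i)
    return succs

def critical_path(succs, n):
    """Priority = longest path (in edges, unit latency) to any DAG leaf."""
    memo = {}
    def depth(v):
        if v not in memo:
            memo[v] = 1 + max((depth(s) for s in succs[v]), default=0)
        return memo[v]
    return [depth(v) for v in range(n)]

block = [("r1", ("r2",)), ("r3", ("r1",)), ("r4", ("r1",)), ("r2", ("r3", "r4"))]
dag = build_dag(block)
print("edges:", {k: sorted(v) for k, v in dag.items()})
print("critical-path priorities:", critical_path(dag, len(block)))
```

A list scheduler would repeatedly pick the ready instruction with the highest priority; the memoized depth() also hints at why node-revisitation overhead during heuristic calculation can stay low.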


International Symposium on Microarchitecture | 1993

Instruction scheduling for the Motorola 88110

Mark Smotherman; Shuchi Chawla; Stan Cox; Brian A. Malloy

Static instruction scheduling is an important optimization to exploit instruction level parallelism. If the scheduler has to consider resource constraints to prevent structural hazards, usually the processor timing is simulated by overlaying binary matrices representing the resource usage of instructions. This technique is rather time consuming. It is shown that the timing can be simulated by a deterministic finite automaton and the matrix operations for a simulation step are replaced by two table lookups. A prototype implementation shows that about an eighteenfold speedup of the simulation can be expected. This performance gain can be used either to speed up existing scheduling algorithms or to use more complex algorithms to improve scheduling results.
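
The automaton construction can be sketched compactly. The two-class machine and its resource patterns below are invented (they are not the 88110's pipelines); the point is only that once states and transitions are precomputed, the scheduler replaces matrix overlay with table lookups.

```python
# Sketch of a resource automaton: each state is a "future busy" pattern
# (one bit mask per upcoming cycle); issuing an instruction either
# collides (structural hazard -> None, i.e. stall) or moves to the
# merged pattern advanced by one cycle. Patterns here are assumed.
from collections import deque

# per-class resource usage: bit mask of units used in each future cycle
USAGE = {"alu": (0b01,), "mem": (0b10, 0b10)}

def build_automaton():
    start = ()                          # nothing reserved
    states, table = {start: 0}, {}
    work = deque([start])
    while work:
        s = work.popleft()
        for cls, pat in USAGE.items():
            width = max(len(s), len(pat))
            old = tuple(s[i] if i < len(s) else 0 for i in range(width))
            new = tuple(pat[i] if i < len(pat) else 0 for i in range(width))
            if any(o & n for o, n in zip(old, new)):
                table[states[s], cls] = None     # collision: must stall
                continue
            nxt = tuple(o | n for o, n in zip(old, new))[1:]  # advance a cycle
            if nxt not in states:
                states[nxt] = len(states)
                work.append(nxt)
            table[states[s], cls] = states[nxt]
    return states, table

states, table = build_automaton()
print(len(states), "states")
print("mem after mem:", table[table[0, "mem"], "mem"])   # None -> stall
```

Checking and issuing an instruction during scheduling now costs the table lookups the abstract describes, instead of overlaying binary resource matrices on every simulation step.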


Acta Informatica | 1986

The reliability of life-critical computer systems

Robert Geist; Mark Smotherman; Kishor S. Trivedi; Joanne Bechta Dugan

In order to aid the designers of life-critical, fault-tolerant computing systems, accurate and efficient methods for reliability prediction are needed. The accuracy requirement implies the need to model the system in great detail, and hence the need to address the problems of large state space, non-exponential distributions, and error analysis. The efficiency requirement implies the need for new model solution techniques, in particular the use of decomposition/aggregation in the context of a hybrid model. We describe a model for reliability prediction which meets both requirements. Specifically, our model is partitioned into fault occurrence and fault/error handling submodels, which are represented by non-homogeneous Markov processes and extended stochastic Petri nets, respectively. The overall aggregated model is a stochastic process that is solved by numerical techniques. Methods to analyze the effects of variations in input parameters on the resulting reliability predictions are also provided.
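
As a toy illustration of the last point, parameter sensitivity can be probed by differentiating the reliability prediction with respect to an input parameter. Central finite differences below stand in for the paper's actual error-analysis machinery; the duplex-with-coverage model (the same kind of toy occurrence model sketched earlier in this list) and all numbers are assumptions.

```python
# Sensitivity sketch: first-order effect of failure-rate uncertainty on
# a toy duplex-with-coverage reliability model. Illustrative only.
import math

def reliability(t, lam, c):
    """Duplex: 2-up --2*lam*c--> 1-up --lam--> failed; (1-c) uncovered."""
    return math.exp(-2*lam*t) + 2*c*(math.exp(-lam*t) - math.exp(-2*lam*t))

def d_reliability_d_lam(t, lam, c, h=1e-9):
    """dR/dlam via central differences."""
    return (reliability(t, lam + h, c) - reliability(t, lam - h, c)) / (2*h)

lam, c, t = 1e-4, 0.99, 10.0
dR = d_reliability_d_lam(t, lam, c)
print(f"R(t = 10 h)         = {reliability(t, lam, c):.8f}")
# first-order bound on the prediction error from 10% uncertainty in lam
print(f"|dR/dlam| * 0.1*lam = {abs(dR) * 0.1 * lam:.2e}")
```

Propagating such derivatives through every uncertain parameter is one simple way to turn parameter uncertainty into the kind of error bounds on system reliability that the abstract calls for.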


Reliability Engineering & System Safety | 1990

Phased mission effectiveness using a nonhomogeneous Markov reward model

Mark Smotherman; Robert Geist

The requirements for industrial systems often include the dependable performance of a sequentially dependent set of tasks, each of which may require different component loadings and configurations. The performance evaluation of such systems is termed phased mission analysis. A new approach to this analysis is presented that is based on a nonhomogeneous Markov reward model in which the concept of a state transition is generalized to include phase changes as well as failures and repairs. This model allows time-dependent failure rates for those phases without repair, and incorporates cumulative reward measures to provide figures of merit for work performed.
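
The reward construction is straightforward to sketch: augment the Kolmogorov equations with an accumulator whose derivative is the expected reward rate. The three-state degradable system, its nonhomogeneous failure rate, and the reward rates below are all assumed for illustration.

```python
# Nonhomogeneous Markov reward sketch: integrate state probabilities and
# the expected accumulated reward (work performed) in one ODE system.
from scipy.integrate import solve_ivp

R = (1.0, 0.6, 0.0)                  # assumed reward rates: full, degraded, failed

def failure_rate(t):
    return 1e-3 * (1.0 + 0.1 * t)    # assumed globally time-dependent rate

def odes(t, y):
    p0, p1, p2, _work = y            # states: full up, degraded, failed
    lam = failure_rate(t)
    return [-2*lam*p0,               # either of two units fails
            2*lam*p0 - lam*p1,
            lam*p1,
            R[0]*p0 + R[1]*p1 + R[2]*p2]   # d(expected work)/dt

sol = solve_ivp(odes, (0.0, 20.0), [1.0, 0.0, 0.0, 0.0], rtol=1e-8)
print("expected work performed over a 20 h mission:", sol.y[3, -1])
```

The final accumulator value is the figure of merit for work performed; phase changes would enter exactly as in the phased-mission model above, as additional time-dependent transitions.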


Digest of Papers, Fault-Tolerant Computing: 20th International Symposium | 1990

Modeling recovery time distributions in ultrareliable fault-tolerant systems

Robert Geist; Mark Smotherman; Ronald Talley

A technique for fitting distributions to empirical recovery time data that focuses on the components that dominate system reliability is proposed. The technique uses Goldfarb's conjugate gradient descent search to minimize the L2 norm of the error projected in the Laplace transform domain. A new parametric family of distributions is also suggested and is seen to provide uniformly better predictions of system reliability than the standard distributions used for this purpose, i.e. gamma, Weibull, and lognormal. Applications to several sets of real recovery time data are provided.
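
The transform-domain fit is simple to reproduce in miniature. The sketch below matches an empirical Laplace transform to a stock gamma family by least squares on a grid of transform points; scipy's Nelder-Mead stands in for Goldfarb's conjugate gradient search, and the synthetic data, grid, and starting point are assumptions.

```python
# Fit a recovery-time distribution by minimizing the squared error
# between empirical and parametric Laplace transforms on a grid of s.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
data = rng.gamma(shape=2.0, scale=0.5, size=400)  # stand-in recovery times (s)
s_grid = np.linspace(0.1, 5.0, 50)                # transform sample points

# empirical transform: L_hat(s) = (1/n) * sum_i exp(-s * t_i)
emp = np.exp(-np.outer(s_grid, data)).mean(axis=1)

def gamma_lt(s, k, theta):
    """Laplace transform of a gamma(shape k, scale theta) density."""
    return (1.0 + theta * s) ** (-k)

def l2_error(params):
    k, theta = params
    if k <= 0 or theta <= 0:
        return np.inf                 # keep the search in the valid region
    return np.sum((emp - gamma_lt(s_grid, k, theta)) ** 2)

fit = minimize(l2_error, x0=[1.0, 1.0], method="Nelder-Mead")
print("fitted (shape, scale):", fit.x)
```

Weighting the grid of transform points could further emphasize the region that dominates system reliability, in the spirit of the abstract's stated focus.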
