Publication


Featured research published by Victor F. Nicola.


IEEE Transactions on Computers | 1992

A unified framework for simulating Markovian models of highly dependable systems

Ambuj Goyal; Perwez Shahabuddin; Philip Heidelberger; Victor F. Nicola; Peter W. Glynn

The authors present a unified framework for simulating Markovian models of highly dependable systems. It is shown that a variance reduction technique called importance sampling can be used to speed up the simulation by many orders of magnitude over standard simulation. This technique can be combined very effectively with regenerative simulation to estimate measures such as steady-state availability and mean time to failure. Moreover, it can be combined with conditional Monte Carlo methods to quickly estimate transient measures such as reliability, expected interval availability, and the distribution of interval availability. The authors show the effectiveness of these methods by using them to simulate large dependability models. They discuss how these methods can be implemented in a software package to compute both transient and steady-state measures simultaneously from the same sample run.
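The core idea, sampling from a biased distribution and unbiasing with the likelihood ratio, can be sketched in a few lines (a toy model with hypothetical numbers, not the paper's failure-biasing schemes): three components must all fail, an event of probability 1e-12 that standard simulation would essentially never observe.

```python
import random

def is_estimate(p=1e-4, q=0.5, n_components=3, n_samples=100_000, seed=1):
    """Estimate P(all components fail) = p**n via importance sampling:
    sample each failure with biased probability q and correct with the
    likelihood ratio of the true measure vs. the sampling measure."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        weight = 1.0
        all_failed = True
        for _ in range(n_components):
            failed = rng.random() < q
            weight *= (p / q) if failed else ((1 - p) / (1 - q))
            all_failed &= failed
        if all_failed:
            total += weight
    return total / n_samples

est = is_estimate()
print(est)  # should land near p**3 = 1e-12
```

With q = 0.5 roughly one sample in eight hits the rare event, each carrying the tiny weight (p/q)**3, so 100,000 samples give a relative error below one percent; unbiased standard simulation would need on the order of 10**12 samples to see the event even once.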


IEEE Transactions on Software Engineering | 1990

Modeling of correlated failures and community error recovery in multiversion software

Victor F. Nicola; Ambuj Goyal

Three aspects of the modeling of multiversion software are considered. First, the beta-binomial distribution is proposed for modeling correlated failures in multiversion software. Second, a combinatorial model for predicting the reliability of a multiversion software configuration is presented. This model can take as inputs failure distributions either from measurements or from a selected distribution (e.g., beta-binomial). Various recovery methods can be incorporated in this model. Third, the effectiveness of the community error recovery method based on checkpointing is investigated. This method appears to be effective only when the failure behaviors of program versions are lightly correlated. Two different types of checkpoint failure are also considered: an omission failure, where the correct output is recognized at a checkpoint but the checkpoint fails to correct the wrong outputs, and a destructive failure, where the good versions get corrupted at a checkpoint.
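A sketch of the first two ingredients, under illustrative parameters (not taken from the paper): the beta-binomial pmf for the number of failing versions, with correlation induced by a common Beta-distributed failure probability, fed into a simple majority-voting reliability calculation.

```python
from math import comb, gamma

def beta_binomial_pmf(k, n, a, b):
    """P(K = k): k of n versions fail, where each run's common failure
    probability is drawn from Beta(a, b), inducing correlation."""
    beta = lambda x, y: gamma(x) * gamma(y) / gamma(x + y)
    return comb(n, k) * beta(k + a, n - k + b) / beta(a, b)

def majority_reliability(n, a, b):
    """System survives a run if fewer than a majority of versions fail."""
    return sum(beta_binomial_pmf(k, n, a, b) for k in range((n + 1) // 2))

# Hypothetical parameters: mean per-version failure prob a/(a+b) = 0.01,
# correlation 1/(a+b+1) between version failures.
print(majority_reliability(3, 0.5, 49.5))
```

Setting b large relative to a keeps the mean failure probability fixed while shrinking the correlation, so the same two parameters let the model interpolate between independent and strongly correlated versions.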


Measurement and Modeling of Computer Systems | 1992

Analysis of the generalized clock buffer replacement scheme for database transaction processing

Victor F. Nicola; Asit Dan; Daniel M. Dias

The CLOCK algorithm is a popular buffer replacement algorithm because of its simplicity and its ability to approximate the performance of the Least Recently Used (LRU) replacement policy. The Generalized Clock (GCLOCK) buffer replacement policy uses a circular buffer and a weight associated with each page brought into the buffer to decide which page to replace. We develop an approximate analysis for the GCLOCK policy under the Independent Reference Model (IRM) that applies to many database transaction processing workloads. We validate the analysis for various workloads with data access skew. Comparison with simulations shows that in all cases examined the error is extremely small (less than 1%). To show the usefulness of the model, we apply it to a Transaction Processing Performance Council benchmark A (TPC-A)-like workload. If knowledge of the different data partitions in this workload is assumed, the analysis shows that, with appropriate choice of weights, the performance of the GCLOCK algorithm can be better than the LRU policy. Performance very close to that for optimal (static) buffer allocation can be achieved by assigning sufficiently high weights, and can be implemented with a reasonably low overhead. Finally, we outline how the model can be extended to capture the effect of page invalidation in a multinode system.
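For reference, a minimal implementation of the GCLOCK replacement mechanism itself (the standard algorithm as commonly described; the paper's contribution is the analytic IRM model of it, which is not reproduced here):

```python
class GClock:
    """Generalized CLOCK buffer: each page enters with an initial weight;
    the clock hand decrements weights and evicts the first page at 0."""
    def __init__(self, size):
        self.size = size
        self.frames = []      # list of [page, weight]
        self.index = {}       # page -> frame position
        self.hand = 0

    def access(self, page, weight=1):
        """Reference a page; return True on a buffer hit, False on a miss."""
        if page in self.index:                  # hit: restore the weight
            self.frames[self.index[page]][1] = weight
            return True
        if len(self.frames) < self.size:        # cold miss: fill a frame
            self.index[page] = len(self.frames)
            self.frames.append([page, weight])
            return False
        while self.frames[self.hand][1] > 0:    # sweep, decrementing weights
            self.frames[self.hand][1] -= 1
            self.hand = (self.hand + 1) % self.size
        victim, _ = self.frames[self.hand]      # evict the zero-weight page
        del self.index[victim]
        self.frames[self.hand] = [page, weight]
        self.index[page] = self.hand
        self.hand = (self.hand + 1) % self.size
        return False
```

With all weights equal to 1 this reduces to plain CLOCK; giving hot partitions higher initial weights is what lets GCLOCK outperform LRU in the skewed workloads the abstract describes.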


Journal of Systems and Software | 1986

On modelling the performance and reliability of multimode computer systems

Vidyadhar G. Kulkarni; Victor F. Nicola; Kishor S. Trivedi

We present an effective technique for the combined performance and reliability analysis of multimode computer systems. A reward rate (or a performance level) is associated with each mode of operation. The switching between different modes is characterized by a continuous-time Markov chain. Different types of service-interruption interactions (as a result of mode switching) are considered. We consider the execution time of a given job on such a system and derive the distribution of its completion time. A useful dual relationship, between the completion time of a given job and the accumulated reward up to a given time, is noted. We demonstrate the use of our technique by means of a simple example.
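The duality the abstract notes can be stated compactly. Notation here is assumed, not taken from the paper: Z(s) is the mode process of the continuous-time Markov chain, r_i the reward rate in mode i, B(t) the reward accumulated by time t, and C(x) the completion time of a job requiring x units of work. For the preemptive-resume case, where no work is lost on mode switches:

```latex
% A job of size x completes by time t iff the reward earned by t reaches x:
\[
  \Pr\{C(x) \le t\} \;=\; \Pr\{B(t) \ge x\},
  \qquad
  B(t) = \int_0^t r_{Z(s)}\,\mathrm{d}s .
\]
```

So the distribution of either quantity immediately yields the distribution of the other, which is why the dual relationship is useful.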


IEEE Transactions on Reliability | 2001

Techniques for fast simulation of models of highly dependable systems

Victor F. Nicola; Perwez Shahabuddin; Marvin K. Nakayama

With the ever-increasing complexity and requirements of highly dependable systems, their evaluation during design and operation is becoming more crucial. Realistic models of such systems are often not amenable to analysis using conventional analytic or numerical methods. Therefore, analysts and designers turn to simulation to evaluate these models. However, accurate estimation of dependability measures of these models requires that the simulation frequently observes system failures, which are rare events in highly dependable systems. This renders ordinary simulation impractical for evaluating such systems. To overcome this problem, simulation techniques based on importance sampling have been developed, and are very effective in certain settings. When importance sampling works well, simulation run lengths can be reduced by several orders of magnitude when estimating transient as well as steady-state dependability measures. This paper reviews some of the importance-sampling techniques that have been developed in recent years to estimate dependability measures efficiently in Markov and non-Markov models of highly dependable systems.


ACM Transactions on Modeling and Computer Simulation | 1994

Bounded relative error in estimating transient measures of highly dependable non-Markovian systems

Philip Heidelberger; Perwez Shahabuddin; Victor F. Nicola

This article deals with fast simulation techniques for estimating transient measures in highly dependable systems. The systems we consider consist of components with generally distributed lifetimes and repair times, with complex interaction among components. As is well known, standard simulation of highly dependable systems is very inefficient, and importance sampling is widely used to improve efficiency. We present two new techniques: one is based on the uniformization approach to simulation, and the other, which we call the exponential transformation, is a natural extension of it. We show that under certain assumptions these techniques have the bounded relative error property, i.e., the relative error of the simulation estimate remains bounded as components become more and more reliable, unlike standard simulation, in which it tends to infinity. This implies that only a fixed number of observations is required to achieve a given relative error, no matter how rare the failure events are.
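The bounded-relative-error property is best appreciated against the baseline it is contrasted with: estimating an event of probability p from n Bernoulli samples by crude Monte Carlo gives relative error sqrt((1 - p) / (n p)), which diverges as p tends to 0.

```python
from math import sqrt

def standard_mc_relative_error(p, n):
    """Relative error (std / mean) of the crude Monte Carlo estimator
    of a probability p from n independent samples."""
    return sqrt((1.0 - p) / (n * p))

# A million samples: fine at p = 1e-3, hopeless at p = 1e-9.
for p in (1e-3, 1e-6, 1e-9):
    print(p, standard_mc_relative_error(p, n=10**6))
```

A bounded-relative-error estimator keeps this ratio fixed as p shrinks, so the required sample size does not grow with the rarity of the failure event.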


IEEE Transactions on Software Engineering | 1990

Comparative analysis of different models of checkpointing and recovery

Victor F. Nicola; J. M. van Spanje

Different checkpointing strategies are combined with recovery models of different refinement levels in database systems. The complexity of the resulting model increases with its accuracy in representing a realistic system. Three different approaches are used, depending on the complexity of the model: analytic, numerical, and simulation. A Markovian queueing model is developed for a combined Poisson and load-dependent checkpointing strategy with stochastic recovery. A state-space analysis approach is used to derive semianalytic expressions for the performance variables in terms of a set of unknown boundary state probabilities. An efficient numerical algorithm for evaluating these unknown probabilities is outlined. The validity of the numerical solution is checked against simulation results and shown to be of acceptable accuracy, particularly in the stable operating range. Simulations show that realistic load-dependent checkpointing yields performance close to that of optimal deterministic checkpointing. Furthermore, the stochastic recovery model is an accurate representation of realistic recovery.
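Not the paper's model, but a useful reference point for the deterministic strategy it compares against: Young's classical first-order approximation for the checkpoint interval that minimizes expected overhead (the parameter values below are hypothetical).

```python
from math import sqrt

def young_checkpoint_interval(checkpoint_cost, mtbf):
    """Young's first-order approximation: the deterministic checkpoint
    interval minimizing expected lost work plus checkpoint overhead
    is T = sqrt(2 * C * MTBF), with C the cost of one checkpoint."""
    return sqrt(2.0 * checkpoint_cost * mtbf)

# Hypothetical numbers: 60 s per checkpoint, one failure per day.
print(young_checkpoint_interval(60.0, 24 * 3600.0))  # interval in seconds
```

The square-root form captures the basic trade-off: checkpointing too often wastes time on checkpoints, too rarely wastes time re-executing lost work after a failure.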


IEEE Transactions on Software Engineering | 1987

Queueing Analysis of Fault-Tolerant Computer Systems

Victor F. Nicola; Vidyadhar G. Kulkarni; Kishor S. Trivedi

In this paper we consider the queueing analysis of a fault-tolerant computer system. The failure/repair behavior of the server is modeled by an irreducible continuous-time Markov chain. Jobs arrive in a Poisson fashion to the system and are serviced according to the FCFS discipline. A failure may cause the loss of the work already done on the job in service, if any; in this case the interrupted job is repeated as soon as the server is ready to deliver service. In addition to the delays due to failures and repairs, jobs suffer delays due to queueing. We present an exact queueing analysis of the system and study the steady-state behavior of the number of jobs in the system. As a numerical example, we consider a system with two processors subject to failures and repairs.
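The delay mechanism described, work lost on a failure and the job repeated from scratch, has a classical closed form when failures are exponential. The sketch below states it and checks it by simulation (parameters hypothetical; the paper's full queueing analysis also accounts for waiting in queue, which this ignores).

```python
import random
from math import exp

def expected_repeat_time(s, fail_rate, mean_repair):
    """Classical preemptive-repeat result: expected completion time of a
    job needing s uninterrupted time units, with exponential(fail_rate)
    failures and mean repair time mean_repair."""
    return (1.0 / fail_rate + mean_repair) * (exp(fail_rate * s) - 1.0)

def simulate_repeat_time(s, fail_rate, mean_repair, n=200_000, seed=7):
    """Monte Carlo check: restart the job from scratch after each failure."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        t = 0.0
        while True:
            fail_at = rng.expovariate(fail_rate)
            if fail_at >= s:       # job finishes before the next failure
                t += s
                break
            t += fail_at + rng.expovariate(1.0 / mean_repair)  # lost work + repair
        total += t
    return total / n

print(expected_repeat_time(1.0, 0.5, 0.2), simulate_repeat_time(1.0, 0.5, 0.2))
```

The exponential factor exp(fail_rate * s) shows why long jobs on failure-prone servers suffer disproportionately: the effective service time grows exponentially in the job length, not linearly.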


IEEE Transactions on Computers | 1993

Fast simulation of highly dependable systems with general failure and repair processes

Victor F. Nicola; Marvin K. Nakayama; Philip Heidelberger; Ambuj Goyal

An approach for simulating models of highly dependable systems with general failure and repair time distributions is described. The approach combines importance sampling with event rescheduling in order to obtain variance reductions in such rare event simulations. The approach is general in nature and allows a variety of features commonly arising in dependability modeling to be simulated effectively. It is shown how the technique can be applied to systems with redundant components and/or periodic maintenance. For different failure time distributions, the effect of the maintenance period on the steady-state availability is explored. The amount of component redundancy needed to achieve a certain reliability level is determined.


Digest of Papers, Fault-Tolerant Computing: 20th International Symposium | 1990

Fast simulation of dependability models with general failure, repair and maintenance processes

Victor F. Nicola; Marvin K. Nakayama; Philip Heidelberger; Ambuj Goyal

An approach to simulating models of highly dependable systems with general failure and repair time distributions is described. The approach combines importance sampling with event rescheduling in order to obtain variance reduction in such rare event simulations. The approach is general in nature and allows effective simulation of a variety of features commonly arising in dependability modeling. For example, it is shown how the technique can be applied to systems with periodic maintenance. The effects on the steady-state availability of the maintenance period and of different failure time distributions are explored. Some of the trade-offs involved in the design of specific rescheduling rules are described, and their potential effectiveness in simulations of systems with nonexponential failure and repair time distributions are demonstrated. It is found that an effective method for selecting the rescheduling distribution is to keep the probability of a failure transition in the range between 0.1 and 0.5.
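The closing recommendation, keeping the failure-transition probability between 0.1 and 0.5, is the essence of failure biasing. A minimal sketch of one biased step (rates hypothetical; the paper's rescheduling rules for general distributions are more involved):

```python
import random

def biased_transition(rng, fail_rate, repair_rate, delta=0.3):
    """One embedded-chain step under failure biasing: force the failure
    transition to probability delta (its true probability is tiny) and
    return the sampled transition with its likelihood-ratio weight."""
    p_fail = fail_rate / (fail_rate + repair_rate)  # true failure prob
    if rng.random() < delta:
        return "failure", p_fail / delta
    return "repair", (1.0 - p_fail) / (1.0 - delta)

# Sanity check: the weights make the scheme unbiased, so E[weight] = 1.
rng = random.Random(1)
mean_w = sum(biased_transition(rng, 1e-5, 1.0)[1] for _ in range(100_000)) / 100_000
print(mean_w)
```

The estimator multiplies the weights along each sampled path; choosing delta well inside (0, 1), as the abstract recommends, makes failure paths common under the sampling measure without letting the likelihood ratios degenerate.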

Collaboration


Dive into Victor F. Nicola's collaboration.

Top Co-Authors

Vidyadhar G. Kulkarni
University of North Carolina at Chapel Hill

Dirk P. Kroese
University of Queensland