Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Alberto Avritzer is active.

Publication


Featured research published by Alberto Avritzer.


IEEE Transactions on Software Engineering | 1995

The automatic generation of load test suites and the assessment of the resulting software

Alberto Avritzer; Elaine J. Weyuker

Three automatic test case generation algorithms intended to test the resource allocation mechanisms of telecommunications software systems are introduced. Although these techniques were specifically designed for testing telecommunications software, they can be used to generate test cases for any software system that is modelable by a Markov chain, provided operational profile data can either be collected or estimated. These algorithms have been used successfully to perform load testing for several real industrial software systems. Experience generating test suites for five such systems is presented. Early experience with the algorithms indicates that they are highly effective at detecting subtle faults that would have been likely to be missed if load testing had been done in the more traditional way, using hand-crafted test cases. A domain-based reliability measure is applied to systems after the load testing algorithms have been used to generate test data. Data are presented for the same five industrial telecommunications systems in order to track the reliability as a function of the degree of system degradation experienced.
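
The paper's three generation algorithms are not reproduced here, but the core idea, drawing operation sequences from a Markov chain whose transition probabilities come from an operational profile, can be sketched briefly. The states, probabilities, and names below are illustrative assumptions, not data from the paper.

```python
import random

# Illustrative Markov-chain model of a resource-allocation subsystem.
# States and transition probabilities are made-up placeholders; in practice
# they would be derived from collected or estimated operational-profile data.
TRANSITIONS = {
    "idle":     [("allocate", 0.7), ("idle", 0.3)],
    "allocate": [("hold", 0.6), ("release", 0.3), ("allocate", 0.1)],
    "hold":     [("release", 0.8), ("hold", 0.2)],
    "release":  [("idle", 1.0)],
}

def generate_test_case(start="idle", max_steps=20, rng=random):
    """Random walk over the chain, yielding one operation sequence."""
    state, path = start, [start]
    for _ in range(max_steps):
        next_states, weights = zip(*TRANSITIONS[state])
        state = rng.choices(next_states, weights=weights)[0]
        path.append(state)
        if state == "idle":  # a test case ends when the system returns to idle
            break
    return path

if __name__ == "__main__":
    random.seed(42)
    for i, tc in enumerate((generate_test_case() for _ in range(5)), 1):
        print(f"test case {i}: {' -> '.join(tc)}")
```

Because sequences are drawn according to the operational profile, frequently exercised resource-allocation paths dominate the generated suite, which is what makes this style of load testing sensitive to faults users are actually likely to hit.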


Empirical Software Engineering | 1997

Monitoring Smoothly Degrading Systems for Increased Dependability

Alberto Avritzer; Elaine J. Weyuker

A strategy is presented for determining when it is advantageous to take some action to restore a system to full capacity. A determination is made of the types of data that need to be collected and circumstances under which the strategy is likely to be useful. Production traffic data is presented for a very large industrial telecommunications project, and the strategy is applied. An investigation is made of when the application of the strategy leads to increased system availability and decreased packet loss experienced by users.
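
The abstract does not state the decision rule, so the sketch below illustrates one simple way such a strategy could be framed: restore capacity only when the packet loss expected from continuing in the degraded state outweighs the loss incurred during the restoration itself. The loss model and all parameters are assumptions, not the paper's.

```python
# Hypothetical capacity-restoration decision, not the paper's actual strategy.

def expected_loss(offered_load, capacity, horizon_s):
    """Packets expected to be dropped over the horizon if nothing is done."""
    overload = max(0.0, offered_load - capacity)  # packets/s that cannot be served
    return overload * horizon_s

def should_restore(offered_load, degraded_capacity, full_capacity,
                   restoration_downtime_s, horizon_s):
    """Restore only if the loss avoided exceeds the loss caused by the outage."""
    loss_if_degraded = expected_loss(offered_load, degraded_capacity, horizon_s)
    loss_during_restore = offered_load * restoration_downtime_s  # full outage
    loss_after_restore = expected_loss(offered_load, full_capacity,
                                       horizon_s - restoration_downtime_s)
    return loss_during_restore + loss_after_restore < loss_if_degraded

# Example: a switch running at 60% capacity under heavy evening traffic.
print(should_restore(offered_load=900.0, degraded_capacity=600.0,
                     full_capacity=1000.0, restoration_downtime_s=30.0,
                     horizon_s=3600.0))
```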


workshop on software and performance | 2002

Software performance testing based on workload characterization

Alberto Avritzer; Joe Kondek; Danielle Liu; Elaine J. Weyuker

A major concern of most businesses is their ability to meet customers' performance requirements. This paper describes our workload-based approach to performance testing, and includes a case study that demonstrates the application of this approach to a large, industrial software system. For this system, we collected data in the field to determine the current production usage, and then assessed the performance of the system under both current workloads and those likely to be encountered in the future. This led to the identification of a software bottleneck which, had it occurred in the field rather than in the test lab, would likely have had significant consequences.
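
As a rough illustration of the workload-based approach, one can scale the per-operation arrival rates observed in production to obtain the future workloads to drive in the test lab. The transaction mix and growth factors below are hypothetical, not the case-study data.

```python
# Hypothetical production transaction mix (transactions per second).
production_rates = {
    "query_subscriber": 120.0,
    "update_profile":    15.0,
    "provision_line":     4.0,
}

def scaled_workload(rates, growth_factor):
    """Future workload assuming uniform growth of the measured mix."""
    return {op: rate * growth_factor for op, rate in rates.items()}

# Drive the test lab at today's load and at a projected 2.5x growth.
for label, factor in [("current", 1.0), ("projected", 2.5)]:
    print(label, scaled_workload(production_rates, factor))
```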


international symposium on software testing and analysis | 1993

Load testing software using deterministic state testing

Alberto Avritzer; Brian Larson

In this paper we introduce a new load testing technique called Deterministic Markov State Testing and report on its application. Our approach is called “deterministic” because the sequence of test case execution is set at planning time, and “state testing” because each test case certifies a unique software state. There are four main advantages of Deterministic Markov State Testing for system testers: provision of precise software state information for root cause analysis in load test, accommodation for limitations of the system test lab configuration, higher acceleration ratios in system test, and simple management of distributed execution of test cases. System testers using the proposed method have great flexibility in dealing with common system test problems: limited access to the system test environment, unstable software, or changing operational conditions. Because each test case verifies correct execution on a path from the idle state to the software state under test, our method does not require the continuous execution of all test cases. Deterministic Markov State Testing is operational-profile-based, and allows for measurement of software reliability robustness when the operational profile changes.
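
The paper's state model is not reproduced here, but the "one test case per software state" idea can be sketched: precompute, for every state in the model, a fixed path from the idle state, so test cases can be planned, ordered, and distributed independently. The state graph below is a hypothetical placeholder, not a model from the paper.

```python
from collections import deque

# Hypothetical state graph of a call-handling subsystem.
GRAPH = {
    "idle":        ["ringing", "maintenance"],
    "ringing":     ["connected", "idle"],
    "connected":   ["on_hold", "idle"],
    "on_hold":     ["connected"],
    "maintenance": ["idle"],
}

def paths_from_idle(graph, start="idle"):
    """BFS gives one fixed, deterministic path from idle to every reachable state."""
    paths = {start: [start]}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in paths:
                paths[nxt] = paths[node] + [nxt]
                queue.append(nxt)
    return paths

if __name__ == "__main__":
    # The planned suite: one deterministic test case certifying each software state.
    for state, path in paths_from_idle(GRAPH).items():
        print(f"certify {state!r}: execute {' -> '.join(path)}")
```

Because each path starts at the idle state, any single test case can be run in isolation, which is what gives testers flexibility when the lab, the software, or the operational conditions are unstable.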


international symposium on software testing and analysis | 1994

Generating test suites for software load testing

Alberto Avritzer; Elaine J. Weyuker

Three automatic test case generation algorithms intended to test the resource allocation of telecommunications software systems are introduced. Although these techniques were specifically designed for testing telecommunications software, they can be used to generate test cases for any software system that is modelable by a Markov chain. In addition, three new stochastic measures of effectiveness are presented to be used to assess these algorithms. Each of these measures is a variant of a mutation score, but incorporates the association of a probability coefficient with each mutant. These algorithms have been used successfully to perform load testing for some real industrial software systems. We present empirical results for five such systems. Early experience with the algorithms indicates that they are highly effective at detecting subtle faults that would have been likely to be missed if load testing had been done in the more traditional way, using hand-crafted test cases.
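
The three stochastic measures are not spelled out in the abstract; the sketch below shows one plausible probability-weighted variant of a mutation score, in which each mutant is weighted by the operational probability of the behavior it perturbs. It is an illustrative assumption, not the paper's definition.

```python
# Illustrative probability-weighted mutation score, not the paper's exact measure.

def weighted_mutation_score(mutants):
    """mutants: iterable of (probability, killed) pairs."""
    total = sum(p for p, _ in mutants)
    killed = sum(p for p, was_killed in mutants if was_killed)
    return killed / total if total else 0.0

# Hypothetical result of running a generated load-test suite against mutants
# whose probabilities reflect how often their code paths occur in the field.
mutants = [(0.50, True), (0.30, True), (0.15, False), (0.05, False)]
print(weighted_mutation_score(mutants))  # 0.8: the high-probability faults were caught
```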


IEEE Software | 1996

Reliability testing of rule-based systems

Alberto Avritzer; Johannes P. Ros; Elaine J. Weyuker

Rule-based software systems are becoming more common in industrial settings, particularly to monitor and control large, real-time systems. The authors describe an algorithm for reliability testing of rule-based systems and their experience using it to test an industrial network surveillance system.


Ibm Systems Journal | 2002

A metric for predicting the performance of an application under a growing workload

Elaine J. Weyuker; Alberto Avritzer

A new software metric, designed to predict the likelihood that the system will fail to meet its performance goals when the workload is scaled, is introduced. Known as the PNL (Performance Nonscalability Likelihood) metric, it is applied to a study of a large industrial system, and used to predict at what workloads bottlenecks are likely to appear when the presented workload is significantly increased. This allows for intelligent planning in order to minimize disruption of acceptable performance for customers. The case study also outlines our performance testing approach and presents the major steps required to identify current production usage and to assess the software performance under current and future workloads.


Empirical Software Engineering | 1999

Metrics to Assess the Likelihood of Project Success Based on Architecture Reviews

Alberto Avritzer; Elaine J. Weyuker

Architecture audits are performed very early in the software development lifecycle, typically before low-level design or code implementation has begun. An empirical study was performed to assess metrics developed to predict the likelihood of risk of failure of a project. The study used data collected during 50 architecture audits performed over a period of two years for large industrial telecommunications systems. The purpose of such a predictor was to identify, at a very early stage, projects that were likely to be at high risk of failure. This would enable the project to take corrective action before significant resources had been expended using a problematic architecture. Detailed information about seven of the 50 projects is presented, along with a discussion of how the proposed metric rated each of these projects. A comparison is made of the metric's evaluation and the assessment of the project made by reviewers during the review process.


Software - Practice and Experience | 1996

Deriving workloads for performance testing

Alberto Avritzer; Elaine J. Weyuker

An approach is presented to compare the performance of an existing production platform and a proposed replacement architecture. The traditional approach to such a comparison is to develop software for the proposed platform, build the new architecture, and collect performance measurements on both the existing system in production and the new system in the development environment. In this paper we propose a new way to design an application-independent workload for doing such a performance evaluation. We demonstrate the applicability of our approach by describing our experience using it to help an industrial organization determine whether or not a proposed architecture would be adequate to meet the organization's performance requirements.


ieee international software metrics symposium | 2002

A metric to predict software scalability

Elaine J. Weyuker; Alberto Avritzer

Software system scalability is an important issue for most businesses. It is essential that as the customer base increases, and therefore the system has to deal with significantly increased loads, the system is prepared to handle the increased traffic so that the users do not encounter unacceptable system performance. For this reason we introduce a new metric, the PNL metric, that can be used to predict the likely loads at which the probability of performance problems will exceed acceptable levels. A case study is described that demonstrates the application of the PNL metric to a large industrial software system. A description of the steps taken to model the software and collect data is provided, as well as the computation of the PNL metric and implications derived from the computation for this system. This information was used by the project to help plan for additional capacity so that the performance experienced by customers was likely to remain acceptable.
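
The PNL metric's definition is not given in the abstract, so the sketch below substitutes a simple M/M/1 queueing approximation to illustrate the kind of prediction involved: for each workload scale factor, estimate the probability that response time exceeds a performance goal, and report the first scale at which that probability becomes unacceptable. This is not the paper's PNL computation, and all rates, goals, and thresholds are hypothetical.

```python
import math

def prob_slow(arrival_rate, service_rate, goal_s):
    """P(response time > goal) in an M/M/1 queue; 1.0 if the queue saturates."""
    if arrival_rate >= service_rate:
        return 1.0
    return math.exp(-(service_rate - arrival_rate) * goal_s)

def first_unacceptable_scale(base_rate, service_rate, goal_s,
                             acceptable_prob, scales):
    """Smallest workload scale factor at which performance becomes unacceptable."""
    for s in scales:
        if prob_slow(base_rate * s, service_rate, goal_s) > acceptable_prob:
            return s
    return None

scales = [1.0 + 0.1 * i for i in range(31)]        # 1.0x .. 4.0x growth
print(first_unacceptable_scale(base_rate=50.0,     # requests/s today
                               service_rate=200.0, # requests/s capacity
                               goal_s=0.05,        # 50 ms response-time goal
                               acceptable_prob=0.05,
                               scales=scales))
```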

Collaboration


Dive into Alberto Avritzer's collaboration.

Top Co-Authors

Anne Koziolek, Karlsruhe Institute of Technology
Lucia Kapova, Karlsruhe Institute of Technology
Daniel Sadoc Menasché, Federal University of Rio de Janeiro
Edmundo de Souza e Silva, Federal University of Rio de Janeiro
Julius C. B. Leite, Federal Fluminense University
Rosa Maria Meri Leão, Federal University of Rio de Janeiro