Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Patrícia D. L. Machado is active.

Publication


Featured research published by Patrícia D. L. Machado.


Mathematical Foundations of Computer Science | 2002

Unit Testing for Casl Architectural Specifications

Patrícia D. L. Machado; Donald Sannella

The problem of testing modular systems against algebraic specifications is discussed. We focus on systems where the decomposition into parts is specified by a Casl-style architectural specification and the parts (units) are developed separately, perhaps by an independent supplier. We consider how to test such units without reference to their context of use. This problem is most acute for generic units where the particular instantiation cannot be predicted.
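
A minimal Python analogue of the idea (CASL units and specifications are not Python, so all names here are hypothetical): a generic unit is tested in isolation by instantiating its formal parameter with a stub that satisfies the parameter's specification, since the actual instantiation cannot be predicted.

```python
# Sketch only: a generic unit whose behaviour depends on a formal parameter
# (a total order), tested against a stub instantiation rather than the
# unknown actual parameter it will be composed with later.

class PriorityQueue:
    """Generic unit: behaviour depends only on the supplied total order."""
    def __init__(self, less):
        self.less = less          # formal parameter: a total order on elements
        self.items = []

    def insert(self, x):
        self.items.append(x)

    def remove_min(self):
        m = self.items[0]
        for x in self.items[1:]:
            if self.less(x, m):
                m = x
        self.items.remove(m)
        return m

def test_remove_min_returns_a_minimum():
    # Stub instantiation: integers with the usual order stand in for the
    # unknown actual parameter; any model of the parameter spec would do.
    q = PriorityQueue(less=lambda a, b: a < b)
    for x in [3, 1, 2]:
        q.insert(x)
    assert q.remove_min() == 1

test_remove_min_returns_a_minimum()
```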


Software Testing, Verification & Reliability | 2011

On the use of a similarity function for test case selection in the context of model-based testing

Emanuela Gadelha Cartaxo; Patrícia D. L. Machado; Francisco Gomes de Oliveira Neto

Test case selection in model-based testing is discussed, focusing on the use of a similarity function. Automatically generated test suites usually contain redundant test cases because test generation algorithms are typically based on structural coverage criteria that are applied exhaustively. These criteria are of little help in detecting redundant test cases, and the resulting suites are often impractical due to the huge number of test cases that can be generated. Both problems are addressed by applying a similarity function: the idea is to keep in the suite the least similar test cases, according to a goal defined in terms of the intended size of the test suite. The strategy presented is compared with random selection by considering transition-based and fault-based coverage. The results show that, in most cases, similarity-based selection can be more effective than random selection when applied to automatically generated test suites.
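
A rough Python sketch of similarity-based selection; the paper's actual similarity function and selection algorithm are not reproduced here, so a Jaccard-style overlap on covered transitions and a greedy discard rule stand in as assumptions.

```python
from itertools import combinations

def similarity(tc_a, tc_b):
    """Illustrative stand-in: Jaccard overlap of the transitions covered by
    two test cases (the paper defines its own similarity function)."""
    a, b = set(tc_a), set(tc_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def select_by_similarity(suite, target_size):
    """Greedily discard one test case from the most similar remaining pair
    until the suite reaches the intended size."""
    suite = list(suite)
    while len(suite) > target_size:
        ta, tb = max(combinations(suite, 2),
                     key=lambda pair: similarity(*pair))
        # Drop the shorter of the two, assuming the longer one tends to
        # cover more behaviour (a heuristic assumption, not the paper's rule).
        suite.remove(ta if len(ta) <= len(tb) else tb)
    return suite

# Test cases represented as tuples of covered transitions (hypothetical labels).
suite = [("a", "b"), ("a", "b", "c"), ("d", "e"), ("a", "c")]
print(select_by_similarity(suite, 2))
```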


Systems, Man and Cybernetics | 2007

Test case generation by means of UML sequence diagrams and labeled transition systems

Emanuela Gadelha Cartaxo; Francisco Gomes de Oliveira Neto; Patrícia D. L. Machado

We present a systematic procedure for functional test case generation for feature testing of mobile phone applications. A feature is an increment of functionality, usually with a coherent purpose, that is added on top of a basic system. Features are usually developed and tested separately from the basic system as independent modules. The procedure is based on model-based testing techniques, with test cases generated from UML sequence diagrams translated into labeled transition systems (LTSs). A case study is presented to illustrate the application of the procedure. The work is part of a research initiative for automation of test case generation, selection and evaluation of Motorola mobile phone applications.
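
A toy Python sketch of the final generation step, assuming a hypothetical LTS already derived from a sequence diagram: bounded label sequences from the initial state to a final state are enumerated, each serving as an abstract test case. The UML-to-LTS translation itself is not shown.

```python
def generate_test_cases(lts, initial, finals, max_depth=6):
    """Depth-first enumeration of label sequences from the initial state to a
    final state of an LTS; each sequence is one abstract test case."""
    tests = []

    def walk(state, path):
        if state in finals and path:
            tests.append(tuple(path))
        if len(path) >= max_depth:      # bound exploration of cycles
            return
        for label, target in lts.get(state, []):
            walk(target, path + [label])

    walk(initial, [])
    return tests

# Hypothetical LTS obtained from a sequence diagram of a phone feature.
lts = {
    "s0": [("press_menu", "s1")],
    "s1": [("select_messages", "s2"), ("press_back", "s0")],
    "s2": [("open_inbox", "s3")],
}
print(generate_test_cases(lts, "s0", finals={"s3"}))
```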


International Conference on Software Engineering | 2006

GridUnit: software testing on the grid

Alexandre Duarte; Walfredo Cirne; Francisco Vilar Brasileiro; Patrícia D. L. Machado

Software testing is a fundamental part of system development. As software grows, its test suite becomes larger and its execution time may become a problem for software developers. This is especially the case for agile methodologies, which preach a short develop/test cycle. Moreover, due to the increasing complexity of systems, there is a need to test software in a variety of environments. In this paper, we introduce GridUnit, an extension of the widely adopted JUnit testing framework that is able to automatically distribute the execution of software tests on a computational grid with minimum user intervention. Experiments conducted with this solution have shown a speed-up of almost 70x, reducing the duration of the test phase of a synthetic application from 24 hours to less than 30 minutes. The solution does not require any source-code modification, hides the grid complexity from the user and improves the cost-effectiveness of the software testing experience.
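
GridUnit itself extends JUnit and targets a computational grid; the Python sketch below only illustrates the underlying idea of running independent test cases in parallel workers with no changes to the tests themselves. All test names are hypothetical.

```python
from concurrent.futures import ProcessPoolExecutor
import time

# Toy stand-ins for independent test cases; a real suite would consist of
# JUnit tests dispatched to grid nodes rather than local worker processes.
def test_parse():    time.sleep(0.2); assert "a,b".split(",") == ["a", "b"]
def test_sum():      time.sleep(0.2); assert sum([1, 2, 3]) == 6
def test_reverse():  time.sleep(0.2); assert "abc"[::-1] == "cba"

def run(test):
    """Execute one test and report a verdict."""
    try:
        test()
        return test.__name__, "PASS"
    except AssertionError:
        return test.__name__, "FAIL"

if __name__ == "__main__":
    tests = [test_parse, test_sum, test_reverse]
    with ProcessPoolExecutor() as pool:     # each test runs in its own worker
        for name, verdict in pool.map(run, tests):
            print(name, verdict)
```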


Algebraic Methodology and Software Technology | 2000

Testing from Structured Algebraic Specifications

Patrícia D. L. Machado

This paper deals with testing from structured algebraic specifications expressed in first-order logic. The issue investigated is the so-called oracle problem, that is, whether a finite and executable procedure can be defined for interpreting the results of tests. For flat specifications, the oracle problem often reduces to the problems of comparing values of a non-observable sort and of handling quantifiers. However, specification-building operations introduce an additional barrier, which can restrict the way specifications and test suites are defined. In this paper, we present a framework for testing from structured specifications and a thorough discussion of the problems that can arise, together with proposed solutions.
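
A small Python illustration of the oracle problem for a single axiom, under the assumption of a set-like data type whose sort is not directly observable: equality of non-observable values is approximated through an observable operation (membership) on a finite sample, and universal quantification is approximated by a finite test set.

```python
def insert(x, s):            # implementation under test (hypothetical)
    return s | {x}

def member(x, s):            # observable operation (returns a boolean)
    return x in s

def observationally_equal(s1, s2, sample):
    """Approximate equality of non-observable values: compare them only
    through the observable operation on a finite set of probe values."""
    return all(member(x, s1) == member(x, s2) for x in sample)

def oracle_insert_commutes(test_data, sample):
    """Axiom: insert(x, insert(y, s)) = insert(y, insert(x, s)),
    checked over a finite test set instead of over all values."""
    return all(
        observationally_equal(insert(x, insert(y, s)),
                              insert(y, insert(x, s)), sample)
        for x, y, s in test_data
    )

print(oracle_insert_commutes(
    test_data=[(1, 2, frozenset()), (3, 3, frozenset({1}))],
    sample=range(5)))
```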


ACM Symposium on Applied Computing | 2008

LTS-BT: a tool to generate and select functional test cases for embedded systems

Emanuela Gadelha Cartaxo; Wilkerson de L. Andrade; Francisco Gomes de Oliveira Neto; Patrícia D. L. Machado

Automation of model-based testing for embedded systems is discussed, with a focus on feature and feature-interruption test case generation and selection from behavioral specifications. For this purpose, the LTS-BT tool is presented. The tool has been designed to suit embedded systems by focusing on selected notations for behavior specification and on tailored techniques for test case generation and selection. This is motivated by particularities of these systems that challenge cost-effective testing.


ACM SIGSOFT Software Engineering Notes | 2006

Generating interaction test cases for mobile phone systems from use case specifications

André L. L. de Figueiredo; Wilkerson de L. Andrade; Patrícia D. L. Machado

The mobile phone market has become ever more competitive, demanding high quality standards. In this context, applications are built as sets of functionalities, called features, which are combined in use scenarios of the application. Because features are usually developed in isolation, testing their interactions in such scenarios is compromised. In this paper, we propose specifying feature interaction requirements with use cases, generating a behavioral model from such specifications, and a strategy for generating test cases from the behavioral model that extracts feature interaction scenarios in such a way that the interactions can be tested.
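
A hedged Python sketch of the scenario-extraction idea, assuming a hypothetical behavioral model whose transitions are tagged with the feature they belong to: only paths that exercise two or more features are kept as interaction test cases. The step from use case specifications to the model is not shown.

```python
def interaction_test_cases(lts, initial, finals, max_depth=6):
    """Keep only the paths through the model that exercise two or more
    features; a rough stand-in for interaction-scenario extraction."""
    selected = []

    def walk(state, path, features):
        if state in finals and len(features) >= 2:
            selected.append([label for label, _ in path])
        if len(path) >= max_depth:
            return
        for label, feature, target in lts.get(state, []):
            walk(target, path + [(label, feature)], features | {feature})

    walk(initial, [], frozenset())
    return selected

# Hypothetical model combining a "messaging" and a "camera" feature.
lts = {
    "s0": [("compose_message", "messaging", "s1")],
    "s1": [("attach_photo", "camera", "s2"), ("send", "messaging", "s3")],
    "s2": [("send", "messaging", "s3")],
}
print(interaction_test_cases(lts, "s0", finals={"s3"}))
```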


Electronic Notes in Theoretical Computer Science | 2007

Towards Property Oriented Testing

Patrícia D. L. Machado; Daniel Aguiar da Silva; Alexandre Mota

Conformance testing is a kind of functional testing in which a formally verified specification is considered and test cases are generated so that conclusions can be established regarding the acceptance or rejection of conforming and non-conforming implementations. If the focus is on a complete specification, test suites may be impractical and even infinite, with unclear relations between test cases and the specification. Property oriented testing focuses on particular, possibly critical, properties of interest. The specification of one or more properties drives the test process, which checks whether they are satisfied by an implementation. Properties are often stated as test purposes, targeting testing at a particular functionality. This paper presents an overview of approaches to property oriented testing for reactive systems, focusing on labelled and symbolic transition systems.
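
A minimal Python sketch of testing driven by a test purpose, using hypothetical specification and purpose automata: exploring their synchronous product keeps only the label sequences that reach an accepting purpose state, so testing is targeted at the property of interest rather than at the complete specification.

```python
def targeted_paths(spec, purpose, start, accept, max_depth=8):
    """Explore the synchronous product of a specification LTS and a test
    purpose automaton; keep label sequences reaching an accepting state."""
    paths = []

    def walk(s_spec, s_purp, path):
        if s_purp in accept:
            paths.append(list(path))
            return
        if len(path) >= max_depth:
            return
        for label, t_spec in spec.get(s_spec, []):
            # The purpose either constrains the label or lets it pass ("*").
            moves = purpose.get(s_purp, {})
            t_purp = moves.get(label, moves.get("*"))
            if t_purp is not None:
                walk(t_spec, t_purp, path + [label])

    walk(*start, [])
    return paths

# Hypothetical spec and purpose: the property of interest is "a request is
# eventually acknowledged"; the purpose accepts once "ack" is observed.
spec = {"q0": [("req", "q1")], "q1": [("nack", "q0"), ("ack", "q2")], "q2": []}
purpose = {"p0": {"ack": "p1", "*": "p0"}}
print(targeted_paths(spec, purpose, start=("q0", "p0"), accept={"p1"}))
```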


Journal of Software Engineering Research and Development | 2015

Revealing influence of model structure and test case profile on the prioritization of test cases in the context of model-based testing

João Felipe Silva Ouriques; Emanuela Gadelha Cartaxo; Patrícia D. L. Machado

Background: Test case prioritization techniques aim at defining an order of test cases that favors the achievement of a goal during test execution, such as revealing failures as early as possible. A number of techniques have already been proposed and investigated in the literature, and experimental results have discussed whether one technique is more successful than others. However, in the context of model-based testing, only a few attempts have been made towards either proposing or experimenting with test case prioritization techniques. Moreover, a number of factors that may influence the results obtained still need to be investigated before more general conclusions can be reached.

Methods: In order to evaluate factors that potentially affect the performance of test case prioritization techniques, we perform three empirical studies: an exploratory study and two experiments. The first study focuses on exposing the techniques to a common and fair environment, since the investigated techniques have never been studied together, and on observing their behavior. The two experiments aim at observing the effects of two factors: the structure of the model and the profile of the test cases that fail. We designed the experiments using the one-factor-at-a-time strategy.

Results: The first study suggests that the investigated techniques perform differently; however, other factors, aside from the test suites and the number of failures, affect the techniques, motivating further investigation. As for the two experiments, on one hand, the model structure does not significantly affect the investigated techniques; on the other hand, we are able to state that the profile of the test cases that fail may have a definite influence on the performance of the techniques investigated.

Conclusions: Through these studies, we conclude that a fair evaluation of test case prioritization techniques must take into account, in addition to the techniques and the test suites, different characteristics of the test cases that fail as variables.


Systems, Man and Cybernetics | 2007

Component-based integration testing from UML interaction diagrams

Patrícia D. L. Machado; Jorge C. A. de Figueiredo; Emerson F. A. Lima; Ana Esther Victor Barbosa; Helton S. Lima

An integration testing method for component-based software is presented. The method, based on the widely used UML (Unified Modelling Language) notation, covers a complete integration testing process at a contractual component level and is supported by tools. Components and their interfaces are specified using UML diagrams and OCL (Object Constraint Language) constraints. The software under test is built from the composition of components in a standard component-based software development process. A case study implemented in the Java language is presented to illustrate the application of the method.
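
The method specifies contracts in UML/OCL; the Python sketch below is only an analogue, with hypothetical components whose OCL-like pre- and postconditions are expressed as assertions at the interface and exercised by an integration test over a composition of components.

```python
# Sketch only: contract-level integration testing with hypothetical
# components; the pre/postconditions stand in for OCL constraints.

class Account:
    def __init__(self, balance=0):
        self.balance = balance

    def withdraw(self, amount):
        assert 0 < amount <= self.balance        # precondition (OCL: pre)
        before = self.balance
        self.balance -= amount
        assert self.balance == before - amount   # postcondition (OCL: post)
        return amount

class ATM:
    """Client component composed with Account through its contract only."""
    def __init__(self, account):
        self.account = account

    def dispense(self, amount):
        return self.account.withdraw(amount)

def test_integration_respects_contract():
    atm = ATM(Account(balance=100))
    assert atm.dispense(40) == 40
    assert atm.account.balance == 60

test_integration_respects_contract()
```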

Collaboration


Dive into Patrícia D. L. Machado's collaborations.

Top Co-Authors


Wilkerson de L. Andrade

Federal University of Campina Grande


Emanuela Gadelha Cartaxo

Federal University of Campina Grande


Everton L. G. Alves

Federal University of Campina Grande


Franklin Ramalho

Federal University of Campina Grande


Tiago Massoni

Federal University of Campina Grande


João Felipe Silva Ouriques

Federal University of Campina Grande


Richard Torkar

University of Gothenburg
