Francisco Gomes de Oliveira Neto
University of Gothenburg
Publication
Featured researches published by Francisco Gomes de Oliveira Neto.
Software Testing, Verification & Reliability | 2011
Emanuela Gadelha Cartaxo; Patrícia D. L. Machado; Francisco Gomes de Oliveira Neto
Test case selection in model-based testing is discussed, focusing on the use of a similarity function. Automatically generated test suites usually contain redundant test cases, because test generation algorithms are typically based on structural coverage criteria that are applied exhaustively. These criteria may not help detect redundant test cases, and the resulting suites are often impractical due to the huge number of test cases that can be generated. Both problems are addressed by applying a similarity function: the idea is to keep in the suite the least similar test cases, according to a goal defined in terms of the intended size of the test suite. The strategy presented is compared with random selection by considering transition-based and fault-based coverage. The results show that, in most cases, similarity-based selection can be more effective than random selection when applied to automatically generated test suites.
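The reduction strategy described in the abstract can be sketched as a greedy loop that repeatedly discards one test from the most similar pair until the suite reaches the intended size. This is a minimal illustration under stated assumptions: the paper does not prescribe a specific similarity function, so Jaccard similarity over covered transitions and the tie-breaking rule are illustrative choices, and all names are hypothetical.

```python
from itertools import combinations

def jaccard(a, b):
    """Similarity of two test cases viewed as sets of covered transitions."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def similarity_based_selection(suite, target_size):
    """Greedily drop one test from the most similar pair until the suite
    reaches the intended size, so the least similar test cases remain."""
    suite = list(suite)
    while len(suite) > target_size:
        # find the most similar pair among the remaining test cases
        i, j = max(combinations(range(len(suite)), 2),
                   key=lambda p: jaccard(suite[p[0]], suite[p[1]]))
        # drop the shorter test of the pair (an arbitrary tie-breaking rule)
        suite.pop(j if len(suite[j]) <= len(suite[i]) else i)
    return suite
```

For example, reducing the suite `[["a","b"], ["a","b","c"], ["x","y"]]` to two tests drops `["a","b"]`, since it is most similar to (and shorter than) `["a","b","c"]`.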
Systems, Man and Cybernetics | 2007
Emanuela Gadelha Cartaxo; Francisco Gomes de Oliveira Neto; Patrícia D. L. Machado
We present a systematic procedure for functional test case generation for feature testing of mobile phone applications. A feature is an increment of functionality, usually with a coherent purpose, that is added on top of a basic system. Features are usually developed and tested separately from the basic system, as independent modules. The procedure is based on model-based testing techniques, with test cases generated from UML sequence diagrams translated into labeled transition systems (LTSs). A case study is presented to illustrate the application of the procedure. The work is part of a research initiative for automation of test case generation, selection and evaluation of Motorola mobile phone applications.
Proceedings of the 1st International Workshop on Combining Modelling and Search-Based Software Engineering | 2013
Francisco Gomes de Oliveira Neto; Robert Feldt; Richard Torkar; Patrícia D. L. Machado
Modeling and abstraction are key in all engineering processes and have found extensive use in software engineering as well. When developing new methodologies and techniques to support software engineers, we want to evaluate them on realistic models. However, this is a challenge since (1) it is hard to get industry to give access to their models, and (2) we need a large number of models to systematically evaluate a technology. This paper proposes that search-based techniques can be used to search for models with desirable properties, which can then be used to systematically evaluate model-based technologies. By targeting properties seen in industrial models, we can get the best of both worlds: models that are similar to those used in industry, but in quantities that allow extensive experimentation. To exemplify our ideas, we consider a specific case in which a model generator is used to create models to test a regression test optimization technique.
Information & Software Technology | 2016
Francisco Gomes de Oliveira Neto; Richard Torkar; Patrícia D. L. Machado
Context: This paper presents the similarity approach for regression testing (SART), where a similarity-based test case selection technique (STCS) is used in a model-based testing process to select test cases that exercise modified parts of a specification model. Unlike other model-based regression testing techniques, SART relies on similarity analysis among test cases to identify modifications, instead of comparing models, hence reducing the dependency on specific types of model. Objective: To present convincing evidence of the usage of similarity measures for modification-traversing test case selection. Method: We investigate SART in a case study and an experiment. The case study uses artefacts from industry and should be seen as a sanity check of SART, while the experiment focuses on gaining statistical power through the generation of synthetic models, in order to provide convincing evidence of SART's effectiveness. Through post-hoc analysis we obtain p-values and effect sizes to observe statistically significant differences between treatments with respect to transition and modification coverage. Results: The case study with industrial artefacts revealed that SART is able to uncover the same number of defects as known similarity-based test case selection techniques. In turn, the experiment shows that SART, unlike the other investigated techniques, achieves 100% modification coverage. In addition, all techniques covered a similar percentage of model transitions. Conclusions: In summary, not only does SART provide transition and defect coverage equal to known STCS techniques, but it greatly exceeds them in covering modified parts of the specification model, making it a suitable candidate for model-based regression testing.
Automation of Software Test | 2018
Francisco Gomes de Oliveira Neto; Azeem Ahmad; Ola Leifler; Kristian Sandahl; Eduard Paul Enoiu
Automated testing is an essential component of Continuous Integration (CI) and Delivery (CD), such as scheduling automated test sessions on overnight builds. That allows stakeholders to execute entire test suites and achieve exhaustive test coverage, since running all tests is often infeasible during work hours, i.e., in parallel to development activities. On the other hand, developers also need test feedback from CI servers when pushing changes, even if not all test cases are executed. In this paper we evaluate similarity-based test case selection (SBTCS) on integration-level tests executed on continuous integration pipelines of two companies. We select test cases that maximise diversity of test coverage and reduce feedback time to developers. Our results confirm existing evidence that SBTCS is a strong candidate for test optimisation, by reducing feedback time (up to 92% faster in our case studies) while achieving full test coverage using only information from test artefacts themselves.
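The core idea of selecting test cases that maximise diversity can be sketched as a greedy max-min procedure: repeatedly pick the test farthest from everything already selected. This is a generic illustration, not the authors' implementation; the Jaccard-based distance, the seeding choice, and all names are assumptions.

```python
def jaccard_distance(a, b):
    """Dissimilarity of two test cases viewed as sets of covered items."""
    a, b = set(a), set(b)
    return 1.0 - (len(a & b) / len(a | b) if a | b else 1.0)

def select_diverse(suite, k):
    """Greedy max-min selection: repeatedly add the test case farthest
    from everything already selected, until k tests are chosen."""
    remaining = list(suite)
    selected = [remaining.pop(0)]  # seed with the first test case
    while remaining and len(selected) < k:
        best = max(remaining,
                   key=lambda t: min(jaccard_distance(t, s) for s in selected))
        remaining.remove(best)
        selected.append(best)
    return selected
```

With a tight feedback-time budget, `k` would be chosen so the selected subset fits the available CI window while still spreading coverage as widely as possible.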
arXiv: Software Engineering | 2018
Robert Feldt; Francisco Gomes de Oliveira Neto; Richard Torkar
As Artificial Intelligence (AI) techniques become more powerful and easier to use, they are increasingly deployed as key components of modern software systems. While this enables new functionality and often allows better adaptation to user needs, it also creates additional problems for software engineers and exposes companies to new risks. Some work has been done to better understand the interaction between Software Engineering and AI, but we lack methods to classify ways of applying AI in software systems and to analyse and understand the risks this poses. Only by doing so can we devise tools and solutions to help mitigate them. This paper presents the AI in SE Application Levels (AI-SEAL) taxonomy, which categorises applications according to their point of application, the type of AI technology used, and the automation level allowed. We show the usefulness of this taxonomy by classifying 15 papers from previous editions of the RAISE workshop. Results show that the taxonomy allows classification of distinct AI applications and provides insights concerning the risks associated with them. We argue that this will be important for companies in deciding how to apply AI in their software applications and in creating strategies for its use.
Archive | 2017
Richard Torkar; Robert Feldt; Francisco Gomes de Oliveira Neto; Lucas Gren
International Conference on Software Testing, Verification and Validation Workshops | 2018
Michael Felderer; Bogdan Marculescu; Francisco Gomes de Oliveira Neto; Robert Feldt; Richard Torkar
arXiv: Software Engineering | 2018
Francisco Gomes de Oliveira Neto; Robert Feldt; Linda Erlenhov; José Benardi de Souza Nunes
arXiv: Software Engineering | 2018
Francisco Gomes de Oliveira Neto; Richard Torkar; Robert Feldt; Lucas Gren; Carlo A. Furia; Ziwei Huang