Publication


Featured research published by Tanja E. J. Vos.


Automated Software Engineering | 2011

Symbolic search-based testing

Arthur I. Baars; Mark Harman; Youssef Hassoun; Kiran Lakhotia; Phil McMinn; Paolo Tonella; Tanja E. J. Vos

We present an algorithm for constructing fitness functions that improve the efficiency of search-based testing when trying to generate branch-adequate test data. The algorithm combines symbolic information with dynamic analysis and has two key advantages: it does not require any change to the underlying test data generation technique, and it avoids many problems traditionally associated with symbolic execution, in particular the presence of loops. We have evaluated the algorithm on industrial closed-source and open-source systems using both local and global search-based testing techniques, demonstrating that both are statistically significantly more efficient with our approach. Significance was assessed using a one-sided, paired Wilcoxon signed-rank test. On average, the local search requires 23.41% and the global search 7.78% fewer fitness evaluations when using a symbolic-execution-based fitness function generated by the algorithm.
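For background on the technique being improved: search-based testing typically guides its search with a fitness function built from an approach level and a branch distance. Below is a minimal illustrative sketch of that standard scheme in Java; it is not the paper's symbolically enhanced construction, and all class and method names are ours.

    // Sketch of the classic branch-distance fitness used in search-based
    // testing (illustrative only; the paper augments such functions with
    // symbolic information). Lower fitness means closer to covering the
    // target branch.
    public final class BranchDistance {

        private static final double K = 1.0; // conventional offset constant

        // Distance for the predicate "a == b": zero when satisfied,
        // otherwise it grows with how far apart the operands are.
        static double equalsDistance(double a, double b) {
            return a == b ? 0.0 : Math.abs(a - b) + K;
        }

        // Distance for the predicate "a < b".
        static double lessThanDistance(double a, double b) {
            return a < b ? 0.0 : (a - b) + K;
        }

        // Normalize into [0, 1) so the distance can be combined with the
        // approach level (how many control dependencies away from the
        // target execution diverged).
        static double normalize(double d) {
            return d / (d + 1.0);
        }

        static double fitness(int approachLevel, double branchDistance) {
            return approachLevel + normalize(branchDistance);
        }
    }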


Software Quality Journal | 2013

Evolutionary functional black-box testing in an industrial setting

Tanja E. J. Vos; Felix F. Lindlar; Benjamin Wilmes; Andreas Windisch; Arthur I. Baars; Peter M. Kruse; Hamilton Gross; Joachim Wegener

In recent years, evolutionary testing research has reported encouraging results for automated functional (i.e. black-box) testing. However, despite promising results, these techniques have hardly been applied to complex, real-world systems, and as such little is known about their scalability, applicability, and acceptability in industry. In this paper, we describe the empirical setup used to study the use of evolutionary functional testing in industry through two case studies, drawn from serial production development environments at Daimler and Berner & Mattner Systemtechnik, respectively. Results of the case studies are presented, and the research questions are assessed against them. In summary, the results indicate that evolutionary functional testing in an industrial setting is both scalable and applicable. However, the creation of fitness functions is time-consuming; although this cost is in some cases offset by the results, it remains a significant factor preventing functional evolutionary testing from more widespread use in industry.
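The case studies themselves used industrial tool chains that are not shown here. Purely as an illustration of the underlying evolutionary technique, a generic search loop over candidate test inputs might look like the following sketch, in which every name is ours and the fitness interface stands in for the hand-crafted fitness functions the paper discusses.

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;
    import java.util.Random;

    // Generic evolutionary loop for test input generation (illustrative).
    public final class EvolutionaryTestLoop {

        // Lower fitness is better, e.g. distance to violating a requirement.
        interface Fitness { double evaluate(double[] candidate); }

        static double[] search(Fitness fitness, int dim, int popSize,
                               int generations, Random rnd) {
            // Random initial population of candidate test inputs.
            List<double[]> pop = new ArrayList<>();
            for (int i = 0; i < popSize; i++) {
                double[] c = new double[dim];
                for (int j = 0; j < dim; j++) c[j] = rnd.nextGaussian() * 100.0;
                pop.add(c);
            }
            for (int g = 0; g < generations; g++) {
                // Truncation selection: keep the better half.
                pop.sort(Comparator.comparingDouble(fitness::evaluate));
                List<double[]> next = new ArrayList<>(pop.subList(0, popSize / 2));
                // Refill the population by Gaussian mutation of survivors.
                while (next.size() < popSize) {
                    double[] child = next.get(rnd.nextInt(popSize / 2)).clone();
                    child[rnd.nextInt(dim)] += rnd.nextGaussian() * 10.0;
                    next.add(child);
                }
                pop = next;
            }
            pop.sort(Comparator.comparingDouble(fitness::evaluate));
            return pop.get(0); // best candidate input found
        }
    }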


Proceedings of the Eighth International Workshop on Search-Based Software Testing | 2015

Unit testing tool competition: round three

Urko Rueda; Tanja E. J. Vos; I.S.W.B. Prasetya

This paper describes the third round of the Java Unit Testing Tool Competition. This edition of the contest evaluates no fewer than seven automated testing tools, and, as in the second round, test suites written by human testers are also used for comparison. This paper contains the full results of the evaluation.


International Journal of Information System Modeling and Design | 2015

TESTAR: Tool Support for Test Automation at the User Interface Level

Tanja E. J. Vos; Peter M. Kruse; Nelly Condori-Fernandez; Sebastian Bauersfeld; Joachim Wegener

Testing applications with a graphical user interface (GUI) is an important, though challenging and time-consuming, task. The state of the art in industry is still capture-and-replay tools, which may simplify the recording and execution of input sequences but do not support the tester in finding fault-sensitive test cases and lead to a huge test-maintenance overhead when the GUI changes. In earlier work, the authors presented the TESTAR tool, an automated approach to testing applications at the GUI level whose objective is to solve part of the maintenance problem by automatically generating test cases based on a structure that is automatically derived from the GUI. In this paper, they report on their experiences transferring TESTAR into three different industrial contexts, with decreasing involvement of the TESTAR developers and increasing participation of the companies in deploying and using TESTAR during testing. The studies were successful in that they achieved impact on both practice and research, give insight into ways of doing innovation transfer, and define a possible strategy for taking automated testing tools to market.
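The abstract's key technical point is that TESTAR derives test actions from a structure obtained from the GUI at run time rather than from recorded scripts. A hypothetical sketch of that loop follows; TESTAR's real API is not reproduced here, and every type and method name below is illustrative.

    import java.util.List;
    import java.util.Random;

    // Hypothetical sketch of a TESTAR-style loop: derive the currently
    // available actions from the GUI's widget structure, pick one, execute
    // it, and check an oracle. Because actions are re-derived on every
    // step, tests need no script maintenance when the GUI changes.
    public final class GuiTestLoop {

        interface Gui {
            List<String> deriveActions(); // from the current widget tree
            void execute(String action);
            boolean isResponsive();       // simple robustness oracle
        }

        static void run(Gui gui, int maxActions, Random rnd) {
            for (int i = 0; i < maxActions; i++) {
                List<String> actions = gui.deriveActions();
                if (actions.isEmpty()) break;
                String action = actions.get(rnd.nextInt(actions.size()));
                gui.execute(action);
                if (!gui.isResponsive()) {
                    throw new AssertionError("Oracle violated after: " + action);
                }
            }
        }
    }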


International Conference on Software Testing, Verification and Validation Workshops | 2013

Unit Testing Tool Competition

Sebastian Bauersfeld; Tanja E. J. Vos; Kiran Lakhotia; Simon M. Poulding; Nelly Condori

This paper describes the Java Unit Testing Tool Competition that ran in the context of the Search-Based Software Testing (SBST) workshop at ICST 2013. It describes the main objective of the benchmark, the Java classes that were selected, the data that was collected, the tools that were used for data collection, the protocol that was carried out to execute the benchmark, and how the final benchmark score for each participating tool can be calculated.


International Conference on Quality Software | 2012

A Methodological Framework for Evaluating Software Testing Techniques and Tools

Tanja E. J. Vos; Beatriz Marín; M. J. Escalona; Alessandro Marchetto

There is a real need in industry for guidelines on which testing techniques to use for different testing objectives, and on how usable (effective, efficient, satisfactory) these techniques are. To date, such guidelines do not exist. They could be obtained from secondary studies of a body of evidence consisting of case studies that evaluate and compare testing techniques and tools. However, such a body of evidence is also lacking. In this paper, we take a first step towards creating it by defining a general methodological evaluation framework that can simplify the design of case studies for comparing software testing tools and make the results more precise, reliable, and easy to compare. Using this framework, (1) software testing practitioners can more easily define case studies through an instantiation of the framework, (2) results can be better compared since all studies are executed according to a similar design, (3) the gap in existing work on methodological evaluation frameworks is narrowed, and (4) a body of evidence is initiated. To validate the framework, we present successful applications of it to various case studies evaluating testing tools in an industrial environment with real objects and real subjects.


Automated Software Engineering | 2012

GUITest: a Java library for fully automated GUI robustness testing

Sebastian Bauersfeld; Tanja E. J. Vos

Graphical user interfaces (GUIs) are substantial parts of today's applications, no matter whether these run on tablets, smartphones, or desktop platforms. Since the GUI is often the only component that humans interact with, it demands thorough testing to ensure an efficient and satisfactory user experience. Being the glue between almost all of an application's components, GUIs also lend themselves to system-level testing. However, GUI testing is inherently difficult and often involves great manual labor, even with modern tools that promise automation. This paper introduces a Java library called GUITest, which makes it possible to generate fully automated GUI robustness tests for complex applications without the need to manually create models or input sequences. We explain how it operates and present first results on its applicability and effectiveness in a test involving Microsoft Word.
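As a rough illustration of what such a robustness test amounts to, the sketch below fires a long random input sequence at a running application and fails on any crash. It is a hypothetical stand-in with names of our own choosing, not the GUITest API.

    import java.util.Random;

    // Illustrative GUI robustness test in the spirit described above:
    // long random input sequences with a crash oracle. The Application
    // interface is hypothetical; GUITest's real API is not shown here.
    public final class RobustnessTest {

        interface Application {
            void click(int x, int y);
            void type(char c);
            boolean hasCrashed(); // e.g. uncaught exception or frozen UI thread
        }

        static void run(Application app, int steps, Random rnd) {
            for (int i = 0; i < steps; i++) {
                if (rnd.nextBoolean()) {
                    app.click(rnd.nextInt(1920), rnd.nextInt(1080)); // random click
                } else {
                    app.type((char) ('a' + rnd.nextInt(26)));        // random key
                }
                if (app.hasCrashed()) {
                    throw new AssertionError("Robustness failure at step " + i);
                }
            }
        }
    }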


IEEE 1st International Workshop on Requirements Engineering and Testing (RET) | 2014

Towards the automated generation of abstract test cases from requirements models

Maria Fernanda Granda; Nelly Condori-Fernandez; Tanja E. J. Vos; Oscar Pastor

In a testing process, the design, selection, creation, and execution of test cases are very time-consuming and error-prone tasks when done manually, since suitable and effective test cases must be derived from the requirements. This paper presents a model-driven testing approach for conceptual schemas that automatically generates a set of abstract test cases from requirements models. In this way, tests and requirements are linked together to find defects as soon as possible, which can considerably reduce the risk of defects and project rework. The authors propose a generation strategy consisting of two meta-models, a set of transformation rules that generate a Test Model and the abstract test cases from an existing approach to communication-oriented Requirements Engineering, and an algorithm based on breadth-first search. A practical application of the approach is included.
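As an illustration of the breadth-first-search step, the sketch below walks a test model represented as a directed graph and emits every cycle-free path from the start state to a leaf as one abstract test case. The graph representation is our assumption for illustration; the paper derives the model via its transformation rules.

    import java.util.ArrayDeque;
    import java.util.ArrayList;
    import java.util.Deque;
    import java.util.List;
    import java.util.Map;

    // BFS over a test model: each state maps to its successor states, and
    // every cycle-free path from the start state to a leaf becomes one
    // abstract test case (illustrative representation, not the paper's).
    public final class AbstractTestCaseGenerator {

        static List<List<String>> generate(Map<String, List<String>> model,
                                           String start) {
            List<List<String>> testCases = new ArrayList<>();
            Deque<List<String>> queue = new ArrayDeque<>();
            queue.add(List.of(start));
            while (!queue.isEmpty()) {
                List<String> path = queue.poll();
                String last = path.get(path.size() - 1);
                List<String> successors = model.getOrDefault(last, List.of());
                if (successors.isEmpty()) {
                    testCases.add(path); // leaf reached: one abstract test case
                } else {
                    for (String next : successors) {
                        if (!path.contains(next)) { // skip cycles
                            List<String> extended = new ArrayList<>(path);
                            extended.add(next);
                            queue.add(extended);
                        }
                    }
                }
            }
            return testCases;
        }
    }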


Research Challenges in Information Science | 2011

Towards testing future Web applications

Beatriz Marín; Tanja E. J. Vos; Giovanni Giachetti; Arthur I. Baars; Paolo Tonella

Current Web applications evolve continuously to provide new and more complex functionality, which can improve the user experience by means of adaptivity and dynamic changes. Since testing is the technique most frequently used in industry to evaluate the quality of software applications, novel testing approaches will be necessary to evaluate the quality of future (and more complex) Web applications. In this paper, we investigate the testing challenges of future Web applications and propose a testing methodology that addresses these challenges through the integration of search-based testing, model-based testing, oracle learning, concurrency testing, combinatorial testing, regression testing, and coverage analysis. The paper also presents a testing metamodel that defines testing concepts and their relationships, which serve as the theoretical basis of the proposed testing methodology.


International Conference on Testing Software and Systems | 2013

Unit Testing Tool Competitions - Lessons Learned

Sebastian Bauersfeld; Tanja E. J. Vos; Kiran Lakhotia

This paper reports on the two rounds of the Java Unit Testing Tool Competition that ran in the context of the Search-Based Software Testing (SBST) workshop at ICST 2013 and the first Future Internet Testing (FITTEST) workshop at ICTSS 2013. It describes the main objectives of the benchmark, the Java classes that were selected in both competitions, the data that was collected, the tools that were used for data collection, the protocol that was carried out to execute the benchmark, and how the final benchmark scores for each participating tool were calculated. Finally, we discuss the challenges encountered during the events, what we learned, and how we plan to improve our framework for future competitions.

Collaboration


Dive into Tanja E. J. Vos's collaborations.

Top Co-Authors

Oscar Pastor, Polytechnic University of Valencia
Arthur I. Baars, Polytechnic University of Valencia
Sebastian Bauersfeld, Polytechnic University of Valencia
Beatriz Marín, Diego Portales University
Paolo Tonella, Fondazione Bruno Kessler