Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Cu D. Nguyen is active.

Publication


Featured research published by Cu D. Nguyen.


International Symposium on Software Testing and Analysis | 2012

Combining model-based and combinatorial testing for effective test case generation

Cu D. Nguyen; Alessandro Marchetto; Paolo Tonella

Model-based testing relies on the assumption that effective adequacy criteria can be defined in terms of model coverage achieved by a set of test paths. However, such test paths are only abstract test cases and input test data must be specified to make them concrete. We propose a novel approach that combines model-based and combinatorial testing in order to generate executable and effective test cases from a model. Our approach starts from a finite state model and applies model-based testing to generate test paths that represent sequences of events to be executed against the system under test. Such paths are transformed into classification trees, enriched with domain input specifications such as data types and partitions. Finally, executable test cases are generated from those trees using t-way combinatorial criteria. While test cases that satisfy a combinatorial criterion can be generated for each individual test path obtained from the model, we introduce a post-optimization algorithm that can guarantee the combinatorial criterion of choice on the whole set of test paths extracted from the model. The resulting test suite is smaller, but it still satisfies the same adequacy criterion. We developed a tool and used it to evaluate our approach on six subject systems of various types and sizes, to study the effectiveness of the generated test suites, the reduction achieved by the post-optimization algorithm, and the effort required to produce them.
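
The combination step can be made concrete with a small sketch (a simplified stand-in, not the authors' tool) of greedy pairwise (2-way) test data selection for a single abstract test path; the parameter names and partitions below are invented for illustration.

```python
from itertools import combinations, product

def value_pairs(case, names):
    """All (parameter, value) pairs a concrete test case covers."""
    return {((a, case[a]), (b, case[b])) for a, b in combinations(names, 2)}

def pairwise_suite(parameters):
    """Greedy 2-way covering: repeatedly keep the combination that covers
    the most still-uncovered value pairs until every pair is covered."""
    names = list(parameters)
    uncovered = {
        ((a, va), (b, vb))
        for a, b in combinations(names, 2)
        for va in parameters[a]
        for vb in parameters[b]
    }
    candidates = [dict(zip(names, vs)) for vs in product(*parameters.values())]
    suite = []
    while uncovered:
        best = max(candidates, key=lambda c: len(value_pairs(c, names) & uncovered))
        suite.append(best)
        uncovered -= value_pairs(best, names)
    return suite

# Hypothetical input partitions for one abstract "login then search" test path:
partitions = {
    "username": ["valid", "unknown", "empty"],
    "password": ["correct", "wrong"],
    "query":    ["simple", "quoted", "empty"],
}
for case in pairwise_suite(partitions):
    print(case)  # a reduced suite that still covers every value pair
```

Higher t-way criteria follow the same scheme with longer value tuples instead of pairs.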


Autonomous Agents and Multi-Agent Systems | 2009

Evolutionary testing of autonomous software agents

Cu D. Nguyen; Anna Perini; Paolo Tonella; Simon Miles; Mark Harman; Michael Luck

A system built in terms of autonomous software agents may require even greater correctness assurance than one that is merely reacting to the immediate control of its users. Agents make substantial decisions for themselves, so thorough testing is an important consideration. However, autonomy also makes testing harder; by their nature, autonomous agents may react in different ways to the same inputs over time, because, for instance, they have changeable goals and knowledge. For this reason, we argue that testing of autonomous agents requires a procedure that caters for a wide range of test case contexts, and that can search for the most demanding of these test cases, even when they are not apparent to the agents’ developers. In this paper, we address this problem, introducing and evaluating an approach to testing autonomous agents that uses evolutionary optimisation to generate demanding test cases. We propose a methodology to derive objective (fitness) functions that drive evolutionary algorithms, and evaluate the overall approach with two simulated autonomous agents. The obtained results show that our approach is effective in finding good test cases automatically.
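
The overall search scheme can be illustrated with a toy evolutionary loop; the simulated agent, the fitness function, and all numeric parameters below are invented stand-ins for the paper's derived objective functions.

```python
import random

def fitness(test_case):
    """Hypothetical objective: reward scenarios that bring a simulated cleaning
    agent close to exhausting its battery far from the charger, i.e. the most
    demanding situations. Smaller safety margin => fitter test case."""
    distance_to_charger, drain_per_step = test_case
    margin = 100.0 - drain_per_step * distance_to_charger  # battery left on return
    return -margin

def mutate(tc, rate=0.3):
    """Small random perturbations of a test case, clamped to the input domain."""
    d, b = tc
    if random.random() < rate:
        d = min(50, max(1, d + random.randint(-5, 5)))
    if random.random() < rate:
        b = min(10.0, max(0.1, b + random.uniform(-1.0, 1.0)))
    return (d, b)

random.seed(1)
population = [(random.randint(1, 50), random.uniform(0.1, 10.0)) for _ in range(20)]
for _ in range(100):                        # generations
    population.sort(key=fitness, reverse=True)
    parents = population[:10]               # truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in range(10)]

print("most demanding scenario found:", max(population, key=fitness))
```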


International Conference on Web Services | 2011

Test Case Prioritization for Audit Testing of Evolving Web Services Using Information Retrieval Techniques

Cu D. Nguyen; Alessandro Marchetto; Paolo Tonella

Web services evolve frequently to meet new business demands and opportunities. However, service changes may affect service compositions that are currently consuming the services. Hence, audit testing (a form of regression testing in charge of checking for compatibility issues) is needed. As service compositions are often in continuous operation and the external services have limited (expensive) access when invoked for testing, audit testing faces severe time and resource constraints, which make test prioritization a crucial technique (only the highest-priority test cases will be executed). This paper presents a novel approach to the prioritization of audit test cases using information retrieval. The approach matches a service change description with the code portions exercised by the relevant test cases, so test cases are prioritized based on their relevance to the service change. We evaluate the proposed approach on a system that composes services from eBay and Google.
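
The matching step can be sketched as ranking test cases by the textual similarity between a change description and the code each test exercises; the coverage data, test names, and the plain term-frequency cosine used here are simplifying assumptions.

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bags of words."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

change_description = "getShippingCosts response schema changed: currency field renamed"
tests_to_covered_code = {  # hypothetical coverage of the composition's code
    "test_checkout_total": "computeTotal getShippingCosts currency convert",
    "test_search_items":   "findItems keyword pagination",
    "test_shipping_quote": "getShippingCosts address currency",
}

query = Counter(change_description.lower().split())
ranked = sorted(
    tests_to_covered_code.items(),
    key=lambda kv: cosine(query, Counter(kv[1].lower().split())),
    reverse=True,
)
for name, _ in ranked:
    print(name)  # highest-priority audit tests first
```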


International Journal of Agent-Oriented Software Engineering | 2010

Goal-oriented testing for MASs

Cu D. Nguyen; Anna Perini; Paolo Tonella

As Multi-Agent Systems (MASs) are increasingly applied in complex distributed applications such as financial and healthcare services, assurance needs to be given to the user that the implemented MASs operate properly, i.e., they meet their specifications and the stakeholders' expectations. Testing is an important and widely applied technique to reach this goal in practice. Current Agent-Oriented Software Engineering (AOSE) methodologies address testing only partially. Some of them exploit Object-Oriented (OO) testing techniques, based on a mapping of agent-oriented abstractions into OO constructs; this may require additional development effort and introduce anomalies. Moreover, a structured testing process for AOSE methodologies is still missing. In this paper we introduce a testing methodology, called Goal-Oriented Software Testing (GOST), for MASs. The methodology specifies a testing process that complements goal-oriented analysis and design. Furthermore, GOST provides a systematic way of deriving test cases from goal-oriented specifications, together with techniques to automate test case generation and execution.
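
A minimal, hypothetical sketch of the underlying idea, deriving one test scaffold per leaf goal of a goal model; GOST itself is a complete methodology and tool chain, and the goal model below is invented.

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    name: str
    fulfillment: str                         # condition the agent should achieve
    subgoals: list = field(default_factory=list)

def derive_test_cases(goal, path=()):
    """Each leaf goal becomes one test scaffold: set up the context implied by
    the goal hierarchy, stimulate the agent, then assert the fulfillment
    condition."""
    path = path + (goal.name,)
    if not goal.subgoals:
        yield {"context": " / ".join(path), "assert": goal.fulfillment}
    for sub in goal.subgoals:
        yield from derive_test_cases(sub, path)

book_trip = Goal("BookTrip", "itinerary confirmed", [
    Goal("FindFlight", "flight reserved"),
    Goal("FindHotel", "hotel reserved"),
])
for tc in derive_test_cases(book_trip):
    print(tc)
```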


International Symposium on Software Testing and Analysis | 2014

Automated testing for SQL injection vulnerabilities: an input mutation approach

Dennis Appelt; Cu D. Nguyen; Lionel C. Briand; Nadia Alshahwan

Web services are increasingly adopted in various domains, from finance and e-government to social media. As they are built on top of web technologies, they also suffer from an unprecedented number of attacks and exploitations, much like the Web itself. Among these attacks, those that target SQL injection vulnerabilities have consistently been top-ranked in recent years. Testing to detect such vulnerabilities before making web services public is crucial. We present in this paper an automated testing approach, named μ4SQLi, and its underpinning set of mutation operators. μ4SQLi can produce effective inputs that lead to executable and harmful SQL statements. Executability is key, as otherwise no injection vulnerability can be exploited. Our evaluation demonstrated that the approach is effective at detecting SQL injection vulnerabilities and at producing inputs that bypass application firewalls, a common configuration in the real world.
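
As an illustration of what input mutation looks like here, the sketch below applies three simplified operators to a legitimate input value; these are classic textbook injection patterns, not the actual μ4SQLi operator set.

```python
def op_comment_tail(value: str) -> str:
    return value + "'--"              # close the quote, comment out the rest

def op_tautology(value: str) -> str:
    return value + "' OR '1'='1"      # append an always-true predicate

def op_encoded(value: str) -> str:
    return value + "%27%20OR%201=1--" # URL-encoded variant to probe WAF bypass

MUTATION_OPERATORS = [op_comment_tail, op_tautology, op_encoded]

def mutants(legit_value: str):
    """Yield candidate malicious inputs derived from a legitimate value, to be
    sent to the service and checked for executable SQL (e.g. via a DB proxy)."""
    for op in MUTATION_OPERATORS:
        yield op(legit_value)

for m in mutants("alice"):
    print(repr(m))
```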


Foundations of Software Engineering | 2013

Automated oracles: an empirical study on cost and effectiveness

Cu D. Nguyen; Alessandro Marchetto; Paolo Tonella

Software testing is an effective, yet expensive, method to improve software quality. Test automation, a potential way to reduce testing cost, has received enormous research attention recently, but the so-called “oracle problem” (how to decide the PASS/FAIL outcome of a test execution) is still a major obstacle to such cost reduction. We have extensively investigated state-of-the-art work that addresses this problem, from areas such as specification mining and model inference. In this paper, we compare three types of automated oracles: Data invariants, Temporal invariants, and Finite State Automata. More specifically, we study the training cost and the false positive rate, and we also evaluate their fault detection capability. Seven medium-to-large industrial applications and real faults were used in our empirical investigation.
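
As a concrete example of one of the oracle types above, the sketch below learns a simple data invariant (a value range) from passing executions and later uses it as a PASS/FAIL oracle; real invariant-inference tools such as Daikon learn far richer properties.

```python
class RangeInvariant:
    """Trained on passing executions; a later observation outside the learned
    range is flagged as a possible failure. False positives arise when the
    training runs did not exercise the full legal range."""
    def __init__(self):
        self.lo = float("inf")
        self.hi = float("-inf")

    def train(self, value: float) -> None:
        self.lo = min(self.lo, value)
        self.hi = max(self.hi, value)

    def check(self, value: float) -> bool:
        return self.lo <= value <= self.hi   # True = PASS

inv = RangeInvariant()
for v in [12.0, 3.5, 7.8]:       # observations from passing test runs
    inv.train(v)

print(inv.check(5.0))    # True: within the learned range
print(inv.check(42.0))   # False: flagged, though it may be a false positive
```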


Agent-Oriented Software Engineering IX | 2009

Experimental Evaluation of Ontology-Based Test Generation for Multi-agent Systems

Cu D. Nguyen; Anna Perini; Paolo Tonella

Software agents are a promising technology for today's complex, distributed systems. Methodologies and techniques that address the testing and reliability of multi-agent systems are increasingly in demand, in particular to support automated test case generation and execution. A novel approach, based on an agent interaction ontology, has recently been proposed and integrated into a testing framework, called eCAT, which can generate and evolve test cases automatically and run them continuously. In this paper, we focus on the experimental evaluation of this ontology-based test generation approach. We use two BDI agent applications as case studies to investigate the performance of the framework as well as its capability to reveal faults.
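
A minimal, hypothetical sketch of ontology-driven input generation: test messages are produced by instantiating the concepts of a tiny hand-written dictionary standing in for an ontology; eCAT itself works with real agent interaction ontologies.

```python
import random

# A toy "ontology" of agent interactions: one concept with typed slots and
# candidate fillers, including boundary and invalid values.
ontology = {
    "RequestQuote": {
        "product":  ["book", "laptop", "unknown-sku"],
        "quantity": [1, 0, -5, 10**6],
    },
}

def generate_messages(concept: str, n: int = 3):
    """Sample test messages by filling each slot of a concept with one of its
    candidate values."""
    slots = ontology[concept]
    for _ in range(n):
        yield {"performative": concept,
               **{slot: random.choice(values) for slot, values in slots.items()}}

for msg in generate_messages("RequestQuote"):
    print(msg)
```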


International Conference on Software Engineering | 2014

Interpolated n-grams for model based testing

Paolo Tonella; Roberto Tiella; Cu D. Nguyen

Models, in particular finite state machine models, provide an invaluable source of information for the derivation of effective test cases. However, models usually approximate part of the program semantics and capture only some of the relevant dependencies and constraints. As a consequence, some of the test cases derived from models are infeasible. In this paper, we propose a method, based on the computation of N-gram statistics, to increase the likelihood of deriving feasible test cases from a model. Correspondingly, the level of model coverage is also expected to increase, because infeasible test cases do not contribute to coverage. While N-grams do improve existing test case derivation methods, they show limitations when the N-gram statistics are incomplete, which necessarily occurs as N increases. Interpolated N-grams overcome this limitation and show the highest performance of all the test case derivation methods compared in this work.
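
The interpolation idea is easy to sketch: the probability of the next event given its recent history is a weighted mix of tri-, bi-, and unigram estimates, so contexts unseen at the higher order still receive non-zero probability. The training sequences and lambda weights below are invented; in practice the weights would be tuned (e.g., by deleted interpolation).

```python
from collections import Counter

def ngram_counts(sequences, n):
    """Count all n-grams over a corpus of event sequences."""
    counts = Counter()
    for seq in sequences:
        for i in range(len(seq) - n + 1):
            counts[tuple(seq[i:i + n])] += 1
    return counts

training = [["open", "edit", "save", "close"],
            ["open", "edit", "edit", "save", "close"],
            ["open", "close"]]

uni, bi, tri = (ngram_counts(training, n) for n in (1, 2, 3))
total = sum(uni.values())

def p_interpolated(event, h2, h1, lambdas=(0.6, 0.3, 0.1)):
    """P(event | h2 h1) as a weighted mix of tri-, bi-, and unigram estimates,
    so unseen 3-grams still get non-zero probability."""
    l3, l2, l1 = lambdas
    p3 = tri[(h2, h1, event)] / bi[(h2, h1)] if bi[(h2, h1)] else 0.0
    p2 = bi[(h1, event)] / uni[(h1,)] if uni[(h1,)] else 0.0
    p1 = uni[(event,)] / total
    return l3 * p3 + l2 * p2 + l1 * p1

# Prefer the next event most likely to keep the derived path feasible:
print(max(["save", "close"], key=lambda e: p_interpolated(e, "open", "edit")))
```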


International Conference on Software Engineering | 2012

An empirical study about the effectiveness of debugging when random test cases are used

Mariano Ceccato; Alessandro Marchetto; Leonardo Mariani; Cu D. Nguyen; Paolo Tonella

Automatically generated test cases are usually evaluated in terms of their fault-revealing or coverage capability. Besides these two aspects, test cases are also the major source of information for fault localization and fixing. The impact of automatically generated test cases on the debugging activity, compared to the use of manually written test cases, has never been studied before. In this paper we report the results obtained from two controlled experiments with human subjects performing debugging tasks using automatically generated or manually written test cases. We investigate whether the features of the former type of test cases, which make them less readable and understandable (e.g., unclear test scenarios, meaningless identifiers), have an impact on the accuracy and efficiency of debugging. The empirical study is aimed at investigating whether, despite the lack of readability in automatically generated test cases, subjects can still take advantage of them during debugging.


ACM Transactions on Software Engineering and Methodology | 2015

Do Automatically Generated Test Cases Make Debugging Easier? An Experimental Assessment of Debugging Effectiveness and Efficiency

Mariano Ceccato; Alessandro Marchetto; Leonardo Mariani; Cu D. Nguyen; Paolo Tonella

Several techniques and tools have been proposed for the automatic generation of test cases. Usually, these tools are evaluated in terms of fault-revealing or coverage capability, but their impact on the manual debugging activity is not considered. The question is whether automatically generated test cases are as effective as manually written tests in supporting debugging. We conducted a family of three experiments (five replications) with humans (55 subjects in total) to assess whether the features of automatically generated test cases, which make them less readable and understandable (e.g., unclear test scenarios, meaningless identifiers), have an impact on the effectiveness and efficiency of debugging. The first two experiments compare different test case generation tools (Randoop vs. EvoSuite). The third experiment investigates the role of code identifiers in test cases (obfuscated vs. original identifiers), since a major difference between manual and automatically generated test cases is that the latter contain meaningless (obfuscated) identifiers. We show that automatically generated test cases are as useful for debugging as manual test cases. Furthermore, we find that, for less experienced developers, automatic tests are on average more useful due to their lower static and dynamic complexity.

Collaboration


Dive into Cu D. Nguyen's collaborations.

Top Co-Authors

Paolo Tonella
Fondazione Bruno Kessler

Anna Perini
Fondazione Bruno Kessler

Dennis Appelt
University of Luxembourg

Mark Harman
University College London

Sadeeq Jan
University of Luxembourg

Leonardo Mariani
University of Milano-Bicocca

Kiran Lakhotia
University College London