Sahar Tahvili
Mälardalen University College
Publications
Featured research published by Sahar Tahvili.
international conference on software testing verification and validation workshops | 2016
Sahar Tahvili; Mehrdad Saadatmand; Stig Larsson; Wasif Afzal; Markus Bohlin; Daniel Sundmark
Prioritization, selection and minimization of test cases are well-known problems in software testing. Test case prioritization deals with the problem of ordering an existing set of test cases, typically with respect to the estimated likelihood of detecting faults. Test case selection addresses the problem of selecting a subset of an existing set of test cases, typically by discarding test cases that do not add any value in improving the quality of the software under test. Most existing approaches for test case prioritization and selection suffer from one or several drawbacks. For example, they rely to a large extent on static analysis of code, which makes them unsuitable for higher levels of testing such as integration testing. Moreover, they do not exploit the possibility of dynamically changing the prioritization or selection of test cases based on the execution results of prior test cases. Such dynamic analysis allows for discarding test cases that do not need to be executed and are thus redundant. This paper proposes a generic method for prioritization and selection of test cases in integration testing that addresses the above issues. We also present the results of an industrial case study where initial evidence suggests the potential usefulness of our approach in testing a safety-critical train control management subsystem.
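As a rough illustration of the dynamic re-prioritization idea described in the abstract, the sketch below re-orders the remaining test cases after every execution and skips test cases whose prerequisites have already failed. The data model, the scoring rule (fault likelihood per unit execution time) and the dependency handling are illustrative assumptions, not the method defined in the paper.

```python
# Minimal sketch of dynamic test-case prioritization and selection.
from dataclasses import dataclass, field

@dataclass
class TestCase:
    name: str
    fault_likelihood: float                       # estimated probability of detecting a fault
    exec_time: float                              # estimated execution time (minutes)
    depends_on: set = field(default_factory=set)  # names of prerequisite test cases

def dynamic_selection(test_cases, run_test):
    """run_test(tc) executes a test case and returns True if it passed."""
    failed, executed = set(), []
    remaining = list(test_cases)
    while remaining:
        # Re-prioritize after every execution: highest expected value per unit time first.
        remaining.sort(key=lambda t: t.fault_likelihood / t.exec_time, reverse=True)
        tc = remaining.pop(0)
        if tc.depends_on & failed:
            # A prerequisite already failed, so this test case adds no new
            # information; it is treated as redundant and skipped.
            continue
        passed = run_test(tc)
        executed.append((tc.name, passed))
        if not passed:
            failed.add(tc.name)
    return executed

# Usage with a stubbed executor in which TC_A fails, so TC_B is skipped:
suite = [
    TestCase("TC_A", 0.7, 5.0),
    TestCase("TC_B", 0.4, 2.0, depends_on={"TC_A"}),
    TestCase("TC_C", 0.9, 10.0),
]
print(dynamic_selection(suite, run_test=lambda tc: tc.name != "TC_A"))
```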
international conference on information technology new generations | 2016
Sahar Tahvili; Wasif Afzal; Mehrdad Saadatmand; Markus Bohlin; Daniel Sundmark; Stig Larsson
Software testing in industrial projects typically requires large test suites. Executing them is commonly expensive in terms of effort and wall-clock time. Indiscriminately executing all available test cases leads to sub-optimal exploitation of testing resources. Selecting too few test cases for execution, on the other hand, might leave a large number of faults undiscovered. Limiting factors such as the allocated budget and time constraints for testing further emphasize the importance of test case prioritization in order to identify test cases that enable earlier detection of faults while respecting such constraints. This paper introduces a novel method for prioritizing test cases so that faults are detected earlier. The method combines TOPSIS decision making with fuzzy principles and is based on multiple criteria such as fault detection probability, execution time, and complexity. Applying the method in an industrial context for testing a train control management subsystem from Bombardier Transportation in Sweden shows its practical benefit.
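The sketch below shows a plain (crisp) TOPSIS ranking over a few criteria; the paper combines TOPSIS with fuzzy values, which is not reproduced here. The example criteria values, weights, and the benefit/cost split are assumptions for illustration.

```python
# Minimal crisp TOPSIS sketch for ranking test cases by multiple criteria.
import numpy as np

def topsis_rank(matrix, weights, benefit):
    """matrix: rows = test cases, cols = criteria; benefit[j] is True if larger is better."""
    m = np.asarray(matrix, dtype=float)
    # Vector-normalize each criterion column and apply the weights.
    v = (m / np.linalg.norm(m, axis=0)) * np.asarray(weights, dtype=float)
    # Ideal and anti-ideal solutions per criterion.
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    closeness = d_neg / (d_pos + d_neg)
    return np.argsort(-closeness)  # indices of test cases, best first

# Example criteria: (fault detection probability, execution time, complexity).
order = topsis_rank(
    matrix=[[0.8, 12, 3], [0.5, 4, 2], [0.9, 30, 5]],
    weights=[0.5, 0.3, 0.2],
    benefit=[True, False, False],  # shorter time and lower complexity are preferred
)
print(order)
```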
international conference on information technology: new generations | 2015
Mehrdad Saadatmand; Sahar Tahvili
One of the main challenges in addressing Non-Functional Requirements (NFRs) in designing systems is to take into account their interdependencies and mutual impacts. For this reason, they cannot be considered in isolation, and a careful balance and tradeoff among them should be established. This makes it difficult to select design decisions and features that lead to the satisfaction of all the different NFRs in the system, and the task becomes even harder as the complexity of a system grows. In this paper, we introduce an approach based on fuzzy logic and decision support systems that helps to identify design alternatives that lead to higher overall satisfaction of NFRs in the system. This is achieved by constructing a model of the NFRs and then performing analysis on the model. To build the model, we use a modified version of the NFR UML profile introduced in our previous work, and using model transformation techniques we automate the analysis of the model.
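As a loose illustration of comparing design alternatives by overall NFR satisfaction, the sketch below scores alternatives under an assumed impact matrix that models interdependencies between NFRs. The satisfaction degrees, the impact matrix, and the aggregation rule are illustrative assumptions and do not reproduce the paper's fuzzy model or UML-profile tooling.

```python
# Minimal sketch: rank design alternatives by overall NFR satisfaction.
import numpy as np

nfrs = ["performance", "security", "usability"]

# impact[i, j]: contribution of NFR i's satisfaction to NFR j (self-impact = 1).
impact = np.array([
    [1.0, -0.2, 0.1],
    [-0.3, 1.0, -0.1],
    [0.0, 0.1, 1.0],
])

# Raw satisfaction degrees in [0, 1], one value per NFR, per design alternative.
alternatives = {
    "design_A": np.array([0.9, 0.5, 0.7]),
    "design_B": np.array([0.6, 0.8, 0.8]),
}

def overall_satisfaction(raw):
    # Propagate interdependencies, clip back to [0, 1], then average.
    adjusted = np.clip(raw @ impact, 0.0, 1.0)
    return adjusted.mean()

scores = {k: round(overall_satisfaction(v), 3) for k, v in alternatives.items()}
print(max(scores, key=scores.get), scores)
```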
software engineering and advanced applications | 2017
Sahar Tahvili; Mehrdad Saadatmand; Markus Bohlin; Wasif Afzal; Sharvathul Hasan Ameerjan
Knowing the execution time of test cases is important to perform test scheduling, prioritization and progress monitoring. This work-in-progress paper presents a novel approach for predicting the execution time of test cases based on test specifications and available historical data on previously executed test cases. Our approach works by extracting timing information (measured and maximum execution time) for various steps in manual test cases. This information is then used to estimate the maximum time for test steps that have not previously been executed, but for which textual specifications exist. As part of our approach, natural language parsing of the specifications is performed to identify word combinations to check whether existing timing information on various test activities is already available or not. Finally, linear regression is used to predict the actual execution time for test cases. A proof-of-concept use case at Bombardier Transportation serves to evaluate the proposed approach.
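A minimal sketch of the final regression step, assuming each test case is represented by simple features (step count and summed historical step time); the toy data and feature choice are assumptions, and the paper's natural-language parsing of specifications is not shown.

```python
# Minimal sketch: predict test-case execution time with linear regression.
import numpy as np
from sklearn.linear_model import LinearRegression

# Features per previously executed test case: (number of steps, summed max step time).
X = np.array([
    [5, 12.0],
    [8, 20.0],
    [3, 7.5],
    [12, 31.0],
])
# Observed actual execution times (minutes).
y = np.array([14.0, 23.5, 8.0, 35.0])

model = LinearRegression().fit(X, y)

# Predict the execution time of a not-yet-executed test case whose
# specification yields 7 steps with an estimated 18 minutes of step time.
print(model.predict(np.array([[7, 18.0]])))
```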
product focused software process improvement | 2016
Sahar Tahvili; Markus Bohlin; Mehrdad Saadatmand; Stig Larsson; Wasif Afzal; Daniel Sundmark
In software system development, testing can take considerable time and resources, and there are numerous examples in the literature of how to improve the testing process. In particular, methods for selection and prioritization of test cases can play a critical role in efficient use of testing resources. This paper focuses on the problem of selection and ordering of integration-level test cases. Integration testing is performed to evaluate the correctness of several units in composition. Further, for reasons of both effectiveness and safety, many embedded systems are still tested manually. To this end, we propose a process, supported by an online decision support system, for ordering and selection of test cases based on the test result of previously executed test cases. To analyze the economic efficiency of such a system, a customized return on investment (ROI) metric tailored for system integration testing is introduced. Using data collected from the development process of a large-scale safety-critical embedded system, we perform Monte Carlo simulations to evaluate the expected ROI of three variants of the proposed new process. The results show that our proposed decision support system is beneficial in terms of ROI at system integration testing and thus qualifies as an important element in improving the integration testing process.
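A minimal Monte Carlo sketch of the ROI idea, assuming the decision support system's benefit comes from skipping a random fraction of redundant test cases; all cost figures and distributions below are illustrative, not the customized ROI metric from the paper.

```python
# Minimal Monte Carlo sketch: expected ROI of a decision support system (DSS)
# that skips redundant test cases during integration testing.
import random

def simulate_roi(n_runs=10_000, n_tests=200, exec_cost=1.0,
                 skip_rate=0.25, dss_cost=30.0):
    rois = []
    for _ in range(n_runs):
        baseline = n_tests * exec_cost
        # Number of test cases the DSS identifies as redundant in this run.
        skipped = sum(random.random() < skip_rate for _ in range(n_tests))
        with_dss = (n_tests - skipped) * exec_cost + dss_cost
        benefit = baseline - with_dss
        rois.append(benefit / dss_cost)  # ROI = net benefit / investment
    return sum(rois) / len(rois)

print(f"expected ROI over simulated runs: {simulate_roi():.2f}")
```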
10th International Conference on Mathematical Problems in Engineering, Aerospace and Sciences (ICNPAA 2014), 15–18 July 2014, Narvik, Norway | 2014
Sahar Tahvili; Jonas Österberg; Sergei Silvestrov; Jonas Biteus
One of the most important objectives in the operations of many corporations today is to maximize profit, and one important tool to that end is the optimization of maintenance activities. Maintenance activities are, at the highest level, divided into two major areas: corrective maintenance (CM) and preventive maintenance (PM). When optimizing maintenance activities through a maintenance plan or policy, we seek to find the best activities to perform at each point in time, be it PM or CM. We explore the use of stochastic simulation, genetic algorithms and other tools for solving complex maintenance planning optimization problems in terms of a suggested framework model based on discrete event simulation.
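A minimal stochastic-simulation sketch of the maintenance-planning idea: compare candidate PM intervals by the simulated total cost of PM plus CM. The exponential failure model and the cost figures are illustrative assumptions, not the discrete-event simulation framework proposed in the paper.

```python
# Minimal sketch: choose a preventive-maintenance (PM) interval by simulation.
import random

def simulate_cost(pm_interval, horizon=1000.0, mean_ttf=120.0,
                  pm_cost=1.0, cm_cost=10.0, n_runs=2000):
    total = 0.0
    for _ in range(n_runs):
        t, cost = 0.0, 0.0
        while t < horizon:
            ttf = random.expovariate(1.0 / mean_ttf)  # time to next failure
            if ttf < pm_interval:
                t += ttf; cost += cm_cost             # failure occurs first: corrective maintenance
            else:
                t += pm_interval; cost += pm_cost     # PM performed before failure
        total += cost
    return total / n_runs

intervals = [30, 60, 90, 120, 150]
print("best PM interval:", min(intervals, key=simulate_cost))
```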
2018 IEEE/ACM 5th International Workshop on Requirements Engineering and Testing (RET) | 2018
Sahar Tahvili; Leo Hatvani; Michael Felderer; Wasif Afzal; Mehrdad Saadatmand; Markus Bohlin
One of the challenging issues in improving test efficiency is achieving a balance between testing goals and testing resources. Test execution scheduling is one way of saving time and budget, where a set of test cases is grouped and tested at the same time. To obtain an optimal test execution schedule, all related information about a test case (e.g. execution time, functionality to be tested, dependency and similarity with other test cases) needs to be analyzed. The test scheduling problem becomes more complicated at higher levels of testing, such as integration testing, and especially in manual testing procedures. Test specifications are generally written in natural language by humans and usually contain ambiguity and uncertainty. Therefore, analyzing a test specification demands a strong learning algorithm. In this position paper, we propose a natural language processing-based approach that, given test specifications at the integration level, allows automatic detection of semantic dependencies between test cases. The proposed approach utilizes the Doc2Vec algorithm to convert each test case into a vector in n-dimensional space. These vectors are then grouped into semantic clusters using the HDBSCAN clustering algorithm. Finally, a set of cluster-based test scheduling strategies are proposed for execution. The proposed approach has been applied to a sub-system from the railway domain by analyzing an ongoing testing project at Bombardier Transportation AB, Sweden.
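A minimal sketch of the described pipeline, assuming gensim's Doc2Vec implementation and the hdbscan package; the toy test specifications and hyperparameters are illustrative, and the cluster-based scheduling strategies themselves are not shown.

```python
# Minimal sketch: embed test specifications with Doc2Vec, cluster with HDBSCAN.
import numpy as np
import hdbscan
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

specs = {
    "TC1": "apply brake command and verify brake pressure signal",
    "TC2": "release brake command and verify pressure drops to zero",
    "TC3": "open passenger door and verify door status indication",
    "TC4": "close passenger door and verify interlock signal",
}

docs = [TaggedDocument(words=text.split(), tags=[tc]) for tc, text in specs.items()]
model = Doc2Vec(docs, vector_size=32, min_count=1, epochs=100)

vectors = np.array([model.dv[tc] for tc in specs])       # one vector per test case
labels = hdbscan.HDBSCAN(min_cluster_size=2).fit_predict(vectors)

# Test cases sharing a cluster label are treated as semantically related
# and can be scheduled for execution together.
for tc, label in zip(specs, labels):
    print(tc, "-> cluster", label)
```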
international conference on software engineering advances | 2015
Sahar Tahvili; Mehrdad Saadatmand; Markus Bohlin
Archive | 2016
Sahar Tahvili
international symposium on intelligent signal processing and communication systems | 2014
Tofigh Allahviranloo; Arjan Skuka; Sahar Tahvili