Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Nashat Mansour is active.

Publication


Featured research published by Nashat Mansour.


ACS/IEEE International Conference on Computer Systems and Applications | 2005

Testing Web services

Reda Siblini; Nashat Mansour

Web services present a promising software technology that provides application-to-application interaction. They are based on communication protocols, service descriptions, and service discovery, and are built on top of existing Web protocols and open XML standards. Web services are described using the Web Services Description Language (WSDL), and the Universal Description, Discovery, and Integration (UDDI) directory provides a registry of Web service descriptions. Testing Web services is important for both the Web service provider and the Web service user. This paper proposes a technique for testing Web services using mutation analysis. The technique is based on applying mutation operators to the WSDL document in order to generate mutated Web service interfaces that are used to test the Web service. For this purpose, we define mutation operators that are specific to WSDL documents. Our empirical results show the usefulness of this technique.
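
The paper's operators work at the WSDL level; as a rough, hypothetical illustration of that idea, the sketch below applies one made-up mutation operator (swapping the declared types of two message parts) to a small WSDL fragment using Python's standard XML library. The sample document and the operator are assumptions, not the authors' actual operators or tool.

```python
# Illustrative sketch of one hypothetical WSDL mutation operator: swap the
# declared types of two message parts so a client generated from the mutated
# interface exercises the service's type handling.
import xml.etree.ElementTree as ET

WSDL_NS = "http://schemas.xmlsoap.org/wsdl/"

SAMPLE_WSDL = f"""<definitions xmlns="{WSDL_NS}">
  <message name="GetQuoteRequest">
    <part name="symbol" type="xsd:string"/>
    <part name="maxAge" type="xsd:int"/>
  </message>
</definitions>"""

def swap_part_types(wsdl_text: str) -> str:
    """Return a mutant WSDL in which the types of the first two parts are swapped."""
    root = ET.fromstring(wsdl_text)
    parts = root.findall(f".//{{{WSDL_NS}}}part")
    if len(parts) >= 2:
        parts[0].attrib["type"], parts[1].attrib["type"] = (
            parts[1].attrib["type"],
            parts[0].attrib["type"],
        )
    return ET.tostring(root, encoding="unicode")

if __name__ == "__main__":
    print(swap_part_types(SAMPLE_WSDL))
```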


Software Quality Journal | 2004

Data Generation for Path Testing

Nashat Mansour; Miran Salame

We present two stochastic search algorithms for generating test cases that execute specified paths in a program. The two algorithms are a simulated annealing algorithm (SA) and a genetic algorithm (GA). These algorithms are based on an optimization formulation of the path testing problem which includes both integer- and real-value test cases. We empirically compare the SA and GA algorithms with each other and with a hill-climbing algorithm, Korel's algorithm (KA), for integer-value-input subject programs, and compare SA and GA with each other on real-value subject programs. Our empirical work uses several subject programs with a number of paths. The results show that: (a) SA and GA are superior to KA in the number of executed paths; (b) SA tends to perform slightly better than GA in terms of the number of executed paths; and (c) GA is faster than SA; however, KA, when it succeeds in finding the solution, is the fastest.
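
Neither algorithm is given as code in the abstract; the following is a minimal simulated-annealing sketch of the general idea, in which a branch-distance-style cost of zero means the candidate input follows the target path. The toy program, path predicates, and cooling schedule are assumptions, not the paper's subject programs or exact formulation.

```python
# Minimal simulated-annealing sketch for path-oriented test data generation.
# Target path in a toy program: require x > 10, then x + y == 50.
import math
import random

def path_cost(x: int, y: int) -> float:
    """0.0 iff the input drives execution down the target path."""
    cost = 0.0
    cost += max(0, 10 - x + 1)      # branch distance for x > 10
    cost += abs((x + y) - 50)       # branch distance for x + y == 50
    return float(cost)

def anneal(max_iters: int = 20000, temp: float = 50.0, cooling: float = 0.999):
    x, y = random.randint(-100, 100), random.randint(-100, 100)
    best = (x, y, path_cost(x, y))
    for _ in range(max_iters):
        nx, ny = x + random.randint(-5, 5), y + random.randint(-5, 5)
        delta = path_cost(nx, ny) - path_cost(x, y)
        # Accept improvements always, and worse moves with a temperature-dependent probability.
        if delta <= 0 or random.random() < math.exp(-delta / max(temp, 1e-9)):
            x, y = nx, ny
        if path_cost(x, y) < best[2]:
            best = (x, y, path_cost(x, y))
        if best[2] == 0.0:
            break
        temp *= cooling
    return best

if __name__ == "__main__":
    x, y, cost = anneal()
    print(f"test case: x={x}, y={y}, remaining cost={cost}")
```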


European Journal of Operational Research | 1999

A distributed genetic algorithm for deterministic and stochastic labor scheduling problems

Fred F. Easton; Nashat Mansour

A recurring operational decision in many service organizations is determining the number of employees, and their work schedules, that minimize labor expenses and expected opportunity costs. These decisions have been modeled as generalized set covering (GSC) problems, deterministic goal programs (DGP), and stochastic goal programs (SGP); each a challenging optimization problem. The pervasiveness and economic significance of these three problems has motivated ongoing development and refinement of heuristic solution procedures. In this paper we present a unified formulation for these three labor scheduling problems and introduce a distributed genetic algorithm (DGA) that solves each of them. Our distributed genetic algorithm operates in parallel on a network of message-passing workstations. Separate subpopulations of solutions evolve independently on each processor but occasionally, the fittest solutions migrate over the network to join neighboring subpopulations. With its standard genetic operators, DGA frequently produces infeasible offspring. A few of these are repaired before they enter the population. However, most enter the population as-is, carrying an appropriate fitness penalty. This allows DGA to exploit potentially favorable adaptations that might be present in infeasible solutions while orienting the locus of the search near the feasible region. We applied the DGA to suites of published test problems for GSC, DGP, and SGP formulations and compared its performance with alternative solution procedures, including other metaheuristics such as simulated annealing and tabu search. We found that DGA outperformed the competing alternatives in terms of mean error, maximum error, and percentage of least cost solutions. While DGA is computationally intensive, the quality of its solutions is commensurate with the effort expended. In plots of solution quality versus CPU time for the various algorithms evaluated in our study, DGA consistently appeared on the efficient frontier.
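
As a rough illustration of the island-model idea described above, here is a small, sequential sketch: several subpopulations evolve independently on a toy shift-covering instance, infeasible offspring stay in the population with a shortfall penalty, and the fittest solution of each island occasionally migrates to a neighboring subpopulation. The problem data, penalty weight, and migration policy are invented for the example, and the code runs in one process rather than on message-passing workstations.

```python
# Island-model GA sketch for a toy shift-covering problem (illustrative only).
import random

# Toy data: 6 planning periods, 3 shift patterns (which periods each covers), demand.
PATTERNS = [(1, 1, 1, 0, 0, 0), (0, 0, 1, 1, 1, 0), (0, 0, 0, 1, 1, 1)]
DEMAND = (2, 2, 3, 3, 2, 1)
MAX_PER_PATTERN = 6

def fitness(sol):
    """Labor cost plus a penalty for uncovered demand (infeasible solutions are kept)."""
    cost = sum(sol)                                   # one cost unit per employee
    shortfall = 0
    for p, need in enumerate(DEMAND):
        covered = sum(n * pat[p] for n, pat in zip(sol, PATTERNS))
        shortfall += max(0, need - covered)
    return cost + 10 * shortfall                      # penalty keeps infeasible offspring usable

def evolve_island(pop, gens=30):
    for _ in range(gens):
        nxt = []
        while len(nxt) < len(pop):
            # Binary tournament selection for both parents.
            a = min(random.sample(pop, 2), key=fitness)
            b = min(random.sample(pop, 2), key=fitness)
            cut = random.randrange(1, len(PATTERNS))
            child = list(a[:cut] + b[cut:])           # one-point crossover
            if random.random() < 0.2:                 # mutation
                i = random.randrange(len(child))
                child[i] = random.randint(0, MAX_PER_PATTERN)
            nxt.append(tuple(child))
        pop = nxt
    return pop

def distributed_ga(islands=4, pop_size=20, epochs=10):
    pops = [[tuple(random.randint(0, MAX_PER_PATTERN) for _ in PATTERNS)
             for _ in range(pop_size)] for _ in range(islands)]
    for _ in range(epochs):
        pops = [evolve_island(p) for p in pops]
        # Migration: the fittest solution of each island joins its neighbor.
        for i, p in enumerate(pops):
            migrant = min(p, key=fitness)
            neighbor = pops[(i + 1) % islands]
            neighbor[random.randrange(len(neighbor))] = migrant
    return min((min(p, key=fitness) for p in pops), key=fitness)

if __name__ == "__main__":
    best = distributed_ga()
    print("staff per pattern:", best, "objective:", fitness(best))
```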


ACM Symposium on Applied Computing | 2001

Regression testing of database applications

Ramzi A. Haraty; Nashat Mansour; Bassel Daou

Database application features such as SQL, exception programming, integrity constraints, and table triggers pose some difficulties for maintenance activities, especially for regression testing that follows modifications to database applications. In this work, we address these difficulties and propose a two-phase regression testing methodology. In Phase 1, we explore control flow and data flow analysis issues of database applications. Then, we propose an impact analysis technique that is based on dependencies that exist among the components of database applications. This analysis leads to selecting test cases from the initial test suite for regression testing the modified application. In Phase 2, further reduction in the regression test cases is performed by using reduction algorithms. We present two such algorithms. Finally, a maintenance environment for database applications is described. Our experience with the environment prototype shows promising results.
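
The impact-analysis step can be pictured as a reachability query over a dependency graph of database application components. The sketch below is a hypothetical, much-simplified illustration: it propagates a modification through made-up dependencies and keeps only the test cases whose coverage touches an affected component. It is not the authors' two-phase methodology or their reduction algorithms.

```python
# Sketch of dependency-based impact analysis for regression test selection.
from collections import deque

# Edges point from a component to the components that depend on it
# (e.g., a table to the triggers and modules that read or write it).
DEPENDENTS = {
    "table:Orders": ["trigger:OnOrderInsert", "module:Billing"],
    "trigger:OnOrderInsert": ["module:Audit"],
    "module:Billing": [],
    "module:Audit": [],
}

TEST_COVERAGE = {
    "T1": {"module:Billing"},
    "T2": {"module:Audit"},
    "T3": {"module:Reporting"},
}

def impacted(modified):
    """All components transitively affected by the modified ones."""
    seen, queue = set(modified), deque(modified)
    while queue:
        for dep in DEPENDENTS.get(queue.popleft(), []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

def select_tests(modified):
    """Select test cases whose coverage intersects the impacted components."""
    affected = impacted(modified)
    return sorted(t for t, cov in TEST_COVERAGE.items() if cov & affected)

if __name__ == "__main__":
    print(select_tests(["table:Orders"]))   # -> ['T1', 'T2']; T3 is safely skipped
```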


Journal of Software: Evolution and Process | 1999

Simulated annealing and genetic algorithms for optimal regression testing

Nashat Mansour; Khalid El-Fakih

The optimal regression testing problem is one of determining the minimum number of test cases needed for revalidating modified software in the maintenance phase. We present two natural optimization algorithms, namely a simulated annealing algorithm and a genetic algorithm, for solving this problem. The algorithms are based on an integer programming formulation of the problem and the program's control flow graph. The main advantage of these algorithms, in comparison with exact algorithms, is that they do not suffer from an exponential explosion for realistic program sizes. The experimental results, which include a comparison with previous algorithms, show that the simulated annealing and genetic algorithms find the optimal or near-optimal number of retests within a reasonable time.
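
The underlying formulation is a covering problem: pick the fewest test cases whose covered segments include every modified segment. The tiny brute-force instance below, with an invented coverage matrix, shows that formulation directly; the paper's simulated annealing and genetic algorithms target the realistic sizes where such enumeration explodes.

```python
# Tiny worked instance of the retest-selection formulation (coverage data made up).
# Each test covers some program segments; we want the minimum set of tests
# covering every modified segment.
from itertools import combinations

COVERS = {                       # test -> segments it exercises
    "T1": {"s1", "s2"},
    "T2": {"s2", "s3"},
    "T3": {"s3", "s4"},
    "T4": {"s1", "s4"},
}
MODIFIED = {"s1", "s3", "s4"}    # segments touched by the maintenance change

def minimum_retest_set():
    """Brute-force reference answer; the paper's SA/GA approximate this at scale."""
    tests = list(COVERS)
    for size in range(1, len(tests) + 1):
        for subset in combinations(tests, size):
            if MODIFIED <= set().union(*(COVERS[t] for t in subset)):
                return subset
    return tuple(tests)

if __name__ == "__main__":
    print(minimum_retest_set())   # e.g. ('T1', 'T3') covers the modified segments with 2 retests
```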


ACS/IEEE International Conference on Computer Systems and Applications | 2006

Regression Testing Web Services-based Applications

Abbas Tarhini; Hacène Fouchal; Nashat Mansour

Web applications can be composed of heterogeneous, self-contained web services. Such applications are usually modified to fix errors or to enhance their functionality. After modifications, regression testing is essential to ensure that the changes do not lead to adverse effects. In this paper, we present a safe regression testing algorithm that selects an adequate number of non-redundant test sequences aimed at finding modification-related errors. In our technique, a web application and the behavior of its composed components are specified by a two-level abstract model represented as a Timed Labeled Transition System. Our algorithm selects every test sequence that corresponds to a different behavior in the modified system. We discuss three situations for applying this algorithm: (1) connecting to a newly established web service that fulfills a composed web service, (2) adding or removing an operation in any of the composed web services, and (3) modifying the specification of the web application. Moreover, modifications handled by the algorithm are classified into three classes: (a) adding an operation, (b) deleting an operation, and (c) fixing a condition or an action. Keywords: labeled transition systems, testing, verification, web service, web application.
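
Ignoring the timing constraints, the selection idea can be illustrated on a plain labeled transition system: enumerate bounded test sequences and keep those that traverse a modified transition, dropping sequences subsumed by longer selected ones. The model, the modification set, and the redundancy rule below are invented for illustration and are not the paper's two-level timed model or exact algorithm.

```python
# Sketch of selecting regression test sequences that exercise modified behavior
# in a labeled transition system (timing constraints omitted).
LTS = {  # state -> [(action, next_state)]
    "s0": [("login", "s1")],
    "s1": [("search", "s2"), ("logout", "s0")],
    "s2": [("book", "s3"), ("logout", "s0")],
    "s3": [],
}
MODIFIED = {("s2", "book", "s3")}   # transitions affected by the change

def sequences(state="s0", path=(), depth=4):
    """Enumerate transition sequences up to a bounded depth."""
    yield path
    if depth == 0:
        return
    for action, nxt in LTS[state]:
        yield from sequences(nxt, path + ((state, action, nxt),), depth - 1)

def select():
    """Keep only sequences that traverse at least one modified transition."""
    selected = [p for p in sequences() if any(t in MODIFIED for t in p)]
    # Drop sequences that are prefixes of longer selected ones (redundant).
    return [p for p in selected
            if not any(q != p and q[:len(p)] == p for q in selected)]

if __name__ == "__main__":
    for seq in select():
        print(" -> ".join(a for _, a, _ in seq))   # e.g. login -> search -> book
```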


International Conference on Supercomputing | 1993

Graph contraction for physical optimization methods: a quality-cost tradeoff for mapping data on parallel computers

Nashat Mansour; Ravi Ponnusamy; Alok N. Choudhary; Geoffrey C. Fox

Mapping data to parallel computers aims at minimizing the execution time of the associated application. However, the mapping itself can take an unacceptable amount of time in comparison with the execution time of the application if the problem is large. In this paper, we first motivate the case for graph contraction as a means of reducing the problem size. We restrict our discussion to applications where the problem domain can be described using a graph (e.g., computational fluid dynamics applications). Then we present a mapping-oriented Parallel Graph Contraction (PGC) heuristic algorithm that yields a smaller representation of the problem to which mapping is then applied. The mapping solution for the original problem is obtained by straightforward interpolation. We then present experimental results on using contracted graphs as inputs to two physical optimization methods, namely a genetic algorithm and simulated annealing. The experimental results show that the PGC algorithm still leads to reasonably good-quality mapping solutions to the original problem, while producing a substantial reduction in mapping time. Finally, we discuss the cost-quality tradeoffs in performing graph contraction.
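
One contraction pass can be sketched as a simple matching step: pair up adjacent vertices, merge each pair into a super-vertex, and rebuild a coarser edge list, keeping a vertex-to-super-vertex map so that a mapping computed on the coarse graph can be interpolated back. The random-matching rule and toy graph below are assumptions; the paper's PGC algorithm is parallel and mapping-oriented.

```python
# Illustrative single contraction pass via random matching (not the paper's PGC).
import random

def contract(edges, num_vertices):
    """Merge matched endpoints into super-vertices; return coarse edges and the vertex map."""
    adj = {v: set() for v in range(num_vertices)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    matched = {}
    order = list(range(num_vertices))
    random.shuffle(order)
    for v in order:
        if v in matched:
            continue
        free = [u for u in adj[v] if u not in matched and u != v]
        if free:
            u = random.choice(free)
            matched[v] = matched[u] = min(u, v)   # both map to one super-vertex
        else:
            matched[v] = v                        # unmatched vertex stays alone
    # Relabel super-vertices consecutively and rebuild the coarse edge list.
    labels = {s: i for i, s in enumerate(sorted(set(matched.values())))}
    coarse = {(min(labels[matched[u]], labels[matched[v]]),
               max(labels[matched[u]], labels[matched[v]]))
              for u, v in edges if matched[u] != matched[v]}
    mapping = {v: labels[matched[v]] for v in range(num_vertices)}
    return sorted(coarse), mapping                # mapping interpolates coarse solutions back

if __name__ == "__main__":
    edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
    coarse_edges, vertex_to_super = contract(edges, 4)
    print(coarse_edges, vertex_to_super)
```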


Journal of Systems and Software | 2001

Empirical comparison of regression test selection algorithms

Nashat Mansour; Rami Bahsoon; Ghinwa Baradhi

In the maintenance phase, the regression test selection problem refers to selecting test cases from the initial suite of test cases used in the development phase. In this paper, we empirically compare five representative regression test selection algorithms, namely the Simulated Annealing, Reduction, Slicing, Dataflow, and Firewall algorithms. The comparison is based on eight quantitative and qualitative criteria: number of selected test cases, execution time, precision, inclusiveness, preprocessing requirements, type of maintenance, level of testing, and type of approach. The empirical results show that the five algorithms can be used for different requirements of regression testing. For example, the Simulated Annealing algorithm can be used for emergency non-safety-critical maintenance situations with a large number of small modifications.
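
Two of the criteria, inclusiveness and precision, have standard quantitative definitions in the regression test selection literature: the extent to which modification-revealing tests are selected, and the extent to which non-modification-revealing tests are omitted. The worked example below uses those standard formulas on made-up numbers; it does not reproduce the paper's measurements.

```python
# Hypothetical worked example of two comparison criteria, inclusiveness and
# precision, using the standard regression-test-selection definitions.
def inclusiveness(selected, revealing):
    """Share (%) of modification-revealing tests that the technique selects."""
    return 100.0 * len(selected & revealing) / len(revealing) if revealing else 100.0

def precision(selected, all_tests, revealing):
    """Share (%) of non-modification-revealing tests that the technique omits."""
    non_revealing = all_tests - revealing
    return 100.0 * len(non_revealing - selected) / len(non_revealing) if non_revealing else 100.0

if __name__ == "__main__":
    all_tests = {f"T{i}" for i in range(1, 11)}        # 10 tests in the initial suite
    revealing = {"T1", "T2", "T3"}                      # tests that expose the change
    selected = {"T1", "T2", "T5"}                       # what a technique picked
    print(round(inclusiveness(selected, revealing), 1))         # 66.7: one revealing test missed
    print(round(precision(selected, all_tests, revealing), 1))  # 85.7: one non-revealing test kept
```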


Australian Software Engineering Conference | 1997

A comparative study of five regression testing algorithms

Ghinwa Baradhi; Nashat Mansour

We compare five regression testing algorithms that include: slicing, incremental, firewall, genetic and simulated annealing algorithms. The comparison is based on the following ten quantitative and qualitative criteria: execution time, number of selected retests, precision, inclusiveness, user parameters, handling of global variables, type of maintenance, type of testing, level of testing, and type of approach. The experimental results show that the five algorithms are suitable for different requirements of regression testing. Nevertheless, the incremental algorithm shows more favorable properties than the others.


Concurrency and Computation: Practice and Experience | 1992

Allocating data to multicomputer nodes by physical optimization algorithms for loosely synchronous computations

Nashat Mansour; Geoffrey C. Fox

Three optimization methods derived from natural sciences are considered for allocating data to multicomputer nodes. These are simulated annealing, genetic algorithms and neural networks. A number of design choices and the addition of preprocessing and postprocessing steps lead to versions of the algorithms which differ in solution qualities and execution times. In this paper the performances of these versions are critically evaluated and compared for test cases with different features. The performance criteria are solution quality, execution time, robustness, bias and parallelizability. Experimental results show that the physical algorithms produce better solutions than those of recursive bisection methods and that they have diverse properties. Hence, different algorithms would be suitable for different applications. For example, the annealing and genetic algorithms produce better solutions and do not show a bias towards particular problem structures, but they are slower than the neural network algorithms. Preprocessing graph contraction is one of the additional steps suggested for the physical methods. It produces a significant reduction in execution time, which is necessary for their applicability to large problems.
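
As a rough illustration of the kind of cost function such physical methods optimize, the sketch below anneals a toy task-to-processor assignment whose cost combines cut communication edges with a load-imbalance penalty. The graph, weights, and schedule are assumptions, not the paper's formulation or its preprocessing and postprocessing steps.

```python
# Simulated-annealing sketch for allocating task-graph vertices to processors (toy example).
import math
import random

EDGES = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 5), (1, 4)]
NUM_TASKS, NUM_PROCS = 6, 2

def cost(assign):
    """Communication (cut edges) plus a load-imbalance penalty."""
    cut = sum(1 for u, v in EDGES if assign[u] != assign[v])
    loads = [assign.count(p) for p in range(NUM_PROCS)]
    return cut + 2 * (max(loads) - min(loads))

def allocate(iters=5000, temp=5.0, cooling=0.999):
    assign = [random.randrange(NUM_PROCS) for _ in range(NUM_TASKS)]
    best, best_cost = assign[:], cost(assign)
    for _ in range(iters):
        cand = assign[:]
        cand[random.randrange(NUM_TASKS)] = random.randrange(NUM_PROCS)   # move one task
        delta = cost(cand) - cost(assign)
        if delta <= 0 or random.random() < math.exp(-delta / max(temp, 1e-9)):
            assign = cand
        if cost(assign) < best_cost:
            best, best_cost = assign[:], cost(assign)
        temp *= cooling
    return best, best_cost

if __name__ == "__main__":
    print(allocate())
```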

Collaboration


Dive into Nashat Mansour's collaborations.

Top Co-Authors

Geoffrey C. Fox (Indiana University Bloomington)
Ramzi A. Haraty (Lebanese American University)
Hassan Khachfe (Lebanese International University)
Hassan Diab (American University of Beirut)
Abbas Tarhini (University of Reims Champagne-Ardenne)
Fatima Kanj (Lebanese American University)
Ahmad Faour (Lebanese American University)
Jalal Kawash (Lebanese American University)