Mamdouh Alenezi
Prince Sultan University
Publications
Featured research published by Mamdouh Alenezi.
International Journal of Computers and Applications | 2014
Mamdouh Alenezi; Kenneth Magel
Coupling, a measure of the interdependence among software entities, is an important property for which many software metrics have been defined. It is widely agreed that the extent of coupling in an object-oriented system has implications for its external quality. Structural and semantic relations between classes can be measured directly from static source code, but both have limitations. To understand which aspects of coupling affect quality or other external attributes of software, this paper presents a new coupling metric for object-oriented systems that analyzes structural and semantic relationships between methods and classes. The paper investigates the use of the proposed metric in change impact analysis and in predicting fault-prone and maintainable classes. By comparing the new metric to other coupling metrics, we show that it is a better predictor of classes impacted by changes. The new metric also shows good promise in predicting both external quality attributes (fault-proneness and maintainability).
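As a rough illustration of combining structural and semantic coupling into one score, the sketch below squashes a structural reference count and blends it with the cosine similarity of the two classes' vocabularies; the function name, the weight `alpha`, and the inputs are assumptions, not the paper's exact metric.

```python
# Illustrative sketch (not the paper's exact metric): blend a structural
# coupling count with the semantic similarity of two classes' vocabularies.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def combined_coupling(struct_refs, terms_a, terms_b, alpha=0.5):
    """struct_refs: method calls/field accesses between the two classes (assumed input).
    terms_a/terms_b: identifiers and comment terms extracted from each class."""
    tfidf = TfidfVectorizer().fit_transform([terms_a, terms_b])
    semantic = cosine_similarity(tfidf[0], tfidf[1])[0, 0]   # similarity in 0..1
    structural = struct_refs / (1.0 + struct_refs)           # squash count to 0..1
    return alpha * structural + (1 - alpha) * semantic

print(combined_coupling(4, "order total price compute tax", "invoice price tax discount"))
```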
International Journal of Cloud Applications and Computing | 2015
Mamdouh Alenezi; Fakhry M. Khellah
Software systems evolve constantly, which requires continuous development and maintenance. Consequently, the architecture of these systems tends to degrade with time, so stability is a key measure for evaluating an architecture. Open-source software systems are becoming increasingly vital these days. Since open-source software systems are usually developed in a different management style, the quality of their architectures needs to be studied. The ISO/IEC SQuaRE quality standard characterizes stability as one of the sub-characteristics of maintainability, and an unstable software architecture can cause the software to require high maintenance cost and effort. In this work, the authors propose a simple yet efficient technique that carefully aggregates package-level stability in order to measure the change in architecture-level stability as the architecture evolves. The proposed method can also be used to study the causes behind positive or negative changes in architecture stability.
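A minimal sketch of the aggregation idea, assuming packages are scored with Martin's instability I = Ce / (Ce + Ca) and combined by a size-weighted average; the paper's exact package-level measure and aggregation may differ.

```python
# Minimal sketch: size-weighted aggregation of package instability into a
# single architecture-level stability value per release.
def instability(ce, ca):
    # Martin's instability: efferent coupling over total coupling.
    return ce / (ce + ca) if (ce + ca) else 0.0

def architecture_stability(packages):
    """packages: list of dicts {'ce': efferent, 'ca': afferent, 'classes': package size}."""
    total = sum(p["classes"] for p in packages)
    weighted_instability = sum(
        instability(p["ce"], p["ca"]) * p["classes"] / total for p in packages
    )
    return 1.0 - weighted_instability   # higher value = more stable architecture

# Hypothetical snapshots of two consecutive releases.
v1 = [{"ce": 2, "ca": 8, "classes": 40}, {"ce": 5, "ca": 1, "classes": 10}]
v2 = [{"ce": 6, "ca": 4, "classes": 45}, {"ce": 5, "ca": 1, "classes": 12}]
print(architecture_stability(v2) - architecture_stability(v1))  # negative => degradation
```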
Proceedings of the International Conference on Engineering & MIS 2015 | 2015
Mamdouh Alenezi; Mohammad Zarour
Throughout software evolution, maintenance actions such as adding new features, fixing problems, and improving the design can positively or negatively affect design quality. Quality degradation, if not handled at the right time, can accumulate and cause serious problems for future maintenance effort. In this work, we study the modularity evolution of two open-source systems by answering two main research questions: (1) which measures can be used to assess the modularity level of software, and (2) did the modularity level of the selected open-source software improve over time? By investigating the available modularity measures, we identified the main measures that can be used to quantify software modularity. Based on our analysis, the modularity of these two systems is not improving over time.
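One common way to quantify modularity, shown below as a hedged sketch rather than the paper's selected measures, is Newman's modularity over a class dependency graph with packages as the community partition (using networkx; the graph and package names are hypothetical).

```python
# Hedged sketch: Newman's modularity of a class dependency graph, with the
# package structure taken as the community partition. Tracking this value
# release by release gives one view of modularity evolution.
import networkx as nx
from networkx.algorithms.community import modularity

G = nx.Graph()
G.add_edges_from([("a.Foo", "a.Bar"), ("a.Bar", "a.Baz"),
                  ("b.Qux", "b.Quux"), ("a.Baz", "b.Qux")])   # class dependency edges
packages = [{"a.Foo", "a.Bar", "a.Baz"}, {"b.Qux", "b.Quux"}]  # partition by package
print(modularity(G, packages))
```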
International Conference on Machine Learning and Applications | 2013
Mamdouh Alenezi; Shadi Banitaan
Large open-source bug tracking systems receive a large number of bug reports daily, and managing this volume of incoming reports is a challenging task. Dealing with these reports manually consumes time and resources, which delays the resolution of crucial bugs that need to be identified and resolved early. Bug triaging is an important process in software maintenance: some bugs are important and need to be fixed right away, whereas others are minor and their fixes can be postponed until resources are available. Most automatic bug assignment approaches do not take the priority of bug reports into consideration, yet assigning bug reports based on their priority may play an important role in enhancing the bug triaging process. In this paper, we present an approach to predict the priority of a reported bug using different machine learning algorithms, namely Naive Bayes, Decision Trees, and Random Forest. We also investigate the effect of two feature sets on classification accuracy. We conduct an experimental evaluation on open-source projects, namely Eclipse and Firefox. The evaluation shows that the proposed approach is feasible for predicting the priority of bug reports, and that feature set 2 outperforms feature set 1. Moreover, both Random Forest and Decision Trees outperform Naive Bayes.
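The classifier comparison can be sketched as follows with scikit-learn; the bug report texts, priority labels, and TF-IDF features are hypothetical stand-ins for the paper's actual feature sets.

```python
# Illustrative sketch of the priority-prediction setup with the three
# classifiers named in the abstract; data and features are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

reports = ["crash on startup when profile is missing",
           "typo in the preferences dialog",
           "memory leak while rendering large pages",
           "toolbar button slightly misaligned",
           "data loss when saving bookmarks",
           "tooltip text overlaps the status bar"]
priorities = ["P1", "P5", "P2", "P4", "P1", "P5"]   # hypothetical triaged labels

new_report = ["browser crashes when opening a malformed PDF"]
for name, clf in [("Naive Bayes", MultinomialNB()),
                  ("Decision Tree", DecisionTreeClassifier(random_state=0)),
                  ("Random Forest", RandomForestClassifier(random_state=0))]:
    pipe = make_pipeline(TfidfVectorizer(), clf)
    pipe.fit(reports, priorities)                    # train on already-triaged reports
    print(name, "->", pipe.predict(new_report)[0])   # predicted priority for a new report
```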
Proceedings of the International Conference on Engineering & MIS 2015 | 2015
Mamdouh Alenezi; Fakhry M. Khellah
Open-source software systems are becoming increasingly vital these days. Since open-source software systems are usually developed in a different management style, the quality of their architectures needs to be studied. The ISO/IEC SQuaRE quality standard characterizes stability as one of the sub-characteristics of maintainability, and an unstable software architecture can cause the software to require high maintenance cost and effort. Almost all stability-related studies target the package level; to our knowledge, no work in the literature addresses stability at the system architecture level. In this work, we propose a simple yet efficient technique that carefully aggregates package-level stability in order to measure the change in architecture-level stability as the architecture evolves. The proposed method can also be used to study the causes behind positive or negative changes in architecture stability.
International Conference on Information Technology: New Generations | 2013
Shadi Banitaan; Mamdouh Alenezi; Kendall E. Nygard; Kenneth Magel
The aim of integration testing is to uncover errors in the interactions between system modules. However, it is generally impossible to test all the interactions between modules because of time and cost constraints, so it is important to focus the testing on the connections presumed to be more error-prone. The goal of this research is to guide the quality assurance team on where in a software system to focus when performing integration testing, to save time and resources. In this work, we use method-level metrics that capture both the dependencies and the internal complexity of methods, and we build a tool that calculates these metrics automatically. We also propose an approach to select the test focus in integration testing, with the main goal of reducing the number of test cases needed while still detecting at least 80% of integration errors. We conducted an experimental study on several Java applications taken from different domains, using an error-seeding technique for evaluation. The experimental results showed that our proposed approach is very effective for selecting the test focus in integration testing: it considerably reduces the number of required test cases while still detecting at least 80% of integration errors.
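A sketch of the test-focus selection idea under stated assumptions: each inter-module connection is scored from method-level numbers (cyclomatic complexity and parameters passed here, as placeholders for the paper's metrics) and only the top-ranked connections receive integration tests.

```python
# Sketch: rank inter-module connections by a score that combines dependency
# strength with method complexity, then spend the test budget on the top ones.
def connection_score(caller_cc, callee_cc, params_passed):
    # Placeholder scoring rule, not the paper's formula.
    return (caller_cc + callee_cc) * (1 + params_passed)

connections = [
    ("Order.checkout -> Payment.charge", {"caller_cc": 7, "callee_cc": 9, "params_passed": 3}),
    ("Order.checkout -> Cart.total",     {"caller_cc": 7, "callee_cc": 2, "params_passed": 0}),
    ("Report.render -> Cart.total",      {"caller_cc": 3, "callee_cc": 2, "params_passed": 1}),
]
ranked = sorted(connections, key=lambda c: connection_score(**c[1]), reverse=True)
budget = max(1, int(0.5 * len(ranked)))        # e.g. test only the top half
for name, _ in ranked[:budget]:
    print("focus integration tests on:", name)
```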
2016 International Conference on Engineering & MIS (ICEMIS) | 2016
Mamdouh Alenezi; Yasir Javed
In this paper, we tested several open-source web applications against common security vulnerabilities. These vulnerabilities span from unnecessary data member declarations to gaps that allow SQL injection. The static security vulnerability testing covered three categories, (1) dodgy code vulnerabilities, (2) malicious code vulnerabilities, and (3) security code vulnerabilities, across seven different web applications built in Java. It is evident from the results that almost all of the selected applications have similar kinds of vulnerabilities, which might have been introduced through hasty programming or a lack of developer awareness of security vulnerabilities. We recommend creating an intelligent development framework that can provide suggestions for secure development by addressing common vulnerabilities, add missing code, and learn from expert developers' practices to overcome security vulnerabilities.
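A hedged sketch of how such category counts could be tallied from a FindBugs/SpotBugs-style XML report, whose dodgy code, malicious code, and security categories mirror the three groups above; the report file name and the exact attribute layout are assumptions.

```python
# Hedged sketch: count static-analysis findings per category from an XML report.
from collections import Counter
import xml.etree.ElementTree as ET

root = ET.parse("spotbugs-report.xml").getroot()           # hypothetical report file
counts = Counter(bug.get("category", "UNKNOWN")            # attribute name assumed
                 for bug in root.iter("BugInstance"))
for category in ("SECURITY", "MALICIOUS_CODE", "STYLE"):   # STYLE ~ "dodgy code"
    print(category, counts.get(category, 0))
```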
Archive | 2018
Mamdouh Alenezi; Iman Almomani
Recently, with the purpose of helping developers reduce the effort needed to build highly secure software, researchers have proposed a number of vulnerable source code prediction models built on different kinds of features. Identifying security vulnerabilities and differentiating vulnerable from non-vulnerable code is not an easy task; commonly, security vulnerabilities remain dormant until they are exploited. Software metrics have been widely used to predict and indicate several quality characteristics of software, but the question at hand is whether they can distinguish vulnerable code from non-vulnerable code. In this work, we conduct a study on static code metrics, their interdependency, and their relationship with security vulnerabilities in Android applications. The aim of the study is to understand: (i) the correlation between static software metrics; (ii) the ability of these metrics to predict security vulnerabilities; and (iii) which metrics are the most informative and discriminative for identifying vulnerable units of code.
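Steps (i) to (iii) can be sketched as follows with a hypothetical metrics table; the CSV file, column names, and label are assumptions, not the study's dataset.

```python
# Illustrative sketch: metric interdependency via a correlation matrix, then a
# classifier's feature importances as a proxy for discriminative power.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

df = pd.read_csv("android_class_metrics.csv")         # hypothetical per-class metrics table
metrics = ["wmc", "cbo", "lcom", "loc", "rfc"]         # assumed metric columns
print(df[metrics].corr(method="spearman"))             # (i) correlation between metrics

clf = RandomForestClassifier(random_state=0).fit(df[metrics], df["vulnerable"])
ranked = sorted(zip(clf.feature_importances_, metrics), reverse=True)
print(ranked)                                          # (ii)+(iii) most informative metrics
```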
Procedia Computer Science | 2017
Khaled Almustafa; Mamdouh Alenezi
Two complementary architectures, software-defined networking (SDN) and network function virtualization (NFV), are emerging to comprehensively address several networking issues. In this work, we introduce the most widely embraced virtualization concepts proposed by the SDN and NFV architectures. We quantitatively evaluate the hardware and energy cost savings of these two architectures compared to existing state-of-the-art 4G network hardware technologies.
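The kind of cost comparison described can be illustrated with the simple total-cost formula below; every figure is a placeholder, not data from the paper.

```python
# Purely illustrative arithmetic: total cost = hardware capex + energy cost over
# a planning horizon, compared between dedicated appliances and consolidated servers.
def total_cost(units, capex_per_unit, watts_per_unit, years, kwh_price=0.12):
    energy_kwh = units * watts_per_unit * 24 * 365 * years / 1000
    return units * capex_per_unit + energy_kwh * kwh_price

legacy = total_cost(units=20, capex_per_unit=15_000, watts_per_unit=400, years=5)
nfv    = total_cost(units=6,  capex_per_unit=8_000,  watts_per_unit=350, years=5)
print(f"estimated saving: {100 * (legacy - nfv) / legacy:.1f}%")
```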
Proceedings of the International Conference on Engineering & MIS 2015 | 2015
Ibrahim Abunadi; Mamdouh Alenezi
Building secure software is challenging, time-consuming, and expensive. Software vulnerability prediction models that identify vulnerable software components are usually used to focus security efforts, with the aim of reducing the time and effort needed to secure software. Existing vulnerability prediction models use process or product metrics and machine learning techniques to identify vulnerable software components. Cross-project vulnerability prediction plays a significant role in appraising the most likely vulnerable software components, specifically for new or inactive projects, yet little effort has been spent on delivering clear guidelines on how to choose the training data for cross-project vulnerability prediction. In this work, we present an empirical study aimed at clarifying how useful cross-project prediction techniques are in predicting software vulnerabilities. Our study employs the classification provided by different machine learning techniques to improve the detection of vulnerable components, and we thoroughly compare the prediction performance of five well-known classifiers. The study is conducted on a publicly available dataset of several PHP open-source web applications, in the context of cross-project vulnerability prediction, which represents one of the main challenges in the vulnerability prediction field.
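A sketch of the cross-project setup under stated assumptions: metrics extracted from one PHP application train the classifiers, and a second application is used only for evaluation (file names, columns, and the three classifiers shown are illustrative; the paper compares five).

```python
# Illustrative cross-project split: fit on the source project, evaluate on the
# target project, and compare classifiers by F1 score.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import f1_score

features = ["loc", "cyclomatic", "fan_in", "fan_out"]    # assumed metric columns
train = pd.read_csv("source_project_metrics.csv")        # hypothetical source project
test = pd.read_csv("target_project_metrics.csv")         # hypothetical target project

for name, clf in [("Logistic Regression", LogisticRegression(max_iter=1000)),
                  ("Random Forest", RandomForestClassifier(random_state=0)),
                  ("Naive Bayes", GaussianNB())]:
    clf.fit(train[features], train["vulnerable"])
    print(name, f1_score(test["vulnerable"], clf.predict(test[features])))
```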